Each record has five fields; the ranges below are the minimum and maximum observed in the dataset.

Field               Type          Length / size
query_id            string        32 characters (fixed)
query               string        6 to 5.38k characters
positive_passages   list          1 to 22 items
negative_passages   list          9 to 100 items
subset              string class  7 distinct values

The example records below list field values in this order; the two bracketed lists in each record are positive_passages followed by negative_passages.
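As a rough illustration of how records with this shape could be consumed, the sketch below walks one query and its labelled passages. It is only a sketch: the file name, the assumption that records are stored one JSON object per line, and the (query, passage text, label) pairing are assumptions not given in the preview; only the five field names and the {docid, text, title} keys of each passage come from the records shown.

```python
import json

# Assumed storage format and file name: one JSON object per line, matching the
# schema above. Neither is stated in the preview.
PATH = "scidocsrr.jsonl"

pairs = []  # (query text, passage text, relevance label)

with open(PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)

        # The five fields from the schema table; query_id and subset are pulled
        # out only to show the full field set.
        query_id = record["query_id"]            # 32-character id
        query = record["query"]                  # query string (a paper title here)
        positives = record["positive_passages"]  # list of {"docid", "text", "title"} dicts
        negatives = record["negative_passages"]  # list of {"docid", "text", "title"} dicts
        subset = record["subset"]                # e.g. "scidocsrr"

        # One labelled (query, passage) pair per candidate passage.
        for passage in positives:
            pairs.append((query, passage["text"], 1))
        for passage in negatives:
            pairs.append((query, passage["text"], 0))
```

A (query, text, label) triple per candidate passage is a common input layout for training or evaluating a reranker, which is what the positive/negative split in these records suggests they are intended for.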
query_id: 900668785875423f138279aef0132844
query: Online Ensemble Learning of Data Streams with Gradually Evolved Classes
[ { "docid": "bd33ed4cde24e8ec16fb94cf543aad8e", "text": "Users' locations are important to many applications such as targeted advertisement and news recommendation. In this paper, we focus on the problem of profiling users' home locations in the context of social network (Twitter). The problem is nontrivial, because signals, which may help to identify a user's location, are scarce and noisy. We propose a unified discriminative influence model, named as UDI, to solve the problem. To overcome the challenge of scarce signals, UDI integrates signals observed from both social network (friends) and user-centric data (tweets) in a unified probabilistic framework. To overcome the challenge of noisy signals, UDI captures how likely a user connects to a signal with respect to 1) the distance between the user and the signal, and 2) the influence scope of the signal. Based on the model, we develop local and global location prediction methods. The experiments on a large scale data set show that our methods improve the state-of-the-art methods by 13%, and achieve the best performance.", "title": "" } ]
[ { "docid": "785164fa04344d976c1d8ed148715ec2", "text": "Integrated Systems Health Management includes as key elements fault detection, fault diagnostics, and failure prognostics. Whereas fault detection and diagnostics have been the subject of considerable emphasis in the Artificial Intelligence (AI) community in the past, prognostics has not enjoyed the same attention. The reason for this lack of attention is in part because prognostics as a discipline has only recently been recognized as a game-changing technology that can push the boundary of systems health management. This paper provides a survey of AI techniques applied to prognostics. The paper is an update to our previously published survey of data-driven prognostics.", "title": "" }, { "docid": "d840814a871a36479e465736077b375a", "text": "With the popularity of the Internet, online news media are pouring numerous of news reports into the Internet every day. People get lost in the information explosion. Although the existing methods are able to extract news reports according to key words, and aggregate news reports into stories or events, they just list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, thus people hardly capture the events development vein. In order to mine the underlying evolution relationships between events within the topic, we propose a novel event evolution Model in this paper. This model utilizes TFIEF and Temporal Distance Cost factor (TDC) to model the event evolution relationships. we construct event evolution relationships map to show the events development vein. The experimental evaluation on real dataset show that our technique precedes the baseline technique.", "title": "" }, { "docid": "23ffdf5e7797e7f01c6d57f1e5546026", "text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.", "title": "" }, { "docid": "892f6150dc4eef8ffaa419cf0ca69532", "text": "Symmetric ankle propulsion is the cornerstone of efficient human walking. The ankle plantar flexors provide the majority of the mechanical work for the step-to-step transition and much of this work is delivered via elastic recoil from the Achilles' tendon — making it highly efficient. 
Even though the plantar flexors play a central role in propulsion, body-weight support and swing initiation during walking, very few assistive devices have focused on aiding ankle plantarflexion. Our goal was to develop a portable ankle exoskeleton taking inspiration from the passive elastic mechanisms at play in the human triceps surae-Achilles' tendon complex during walking. The challenge was to use parallel springs to provide ankle joint mechanical assistance during stance phase but allow free ankle rotation during swing phase. To do this we developed a novel ‘smart-clutch’ that can engage and disengage a parallel spring based only on ankle kinematic state. The system is purely passive — containing no motors, electronics or external power supply. This ‘energy-neutral’ ankle exoskeleton could be used to restore symmetry and reduce metabolic energy expenditure of walking in populations with weak ankle plantar flexors (e.g. stroke, spinal cord injury, normal aging).", "title": "" }, { "docid": "18b3328725661770be1f408f37c7eb64", "text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.", "title": "" }, { "docid": "4630cb81feb8519de1e12d9061d557f3", "text": "Estimation of fragility functions using dynamic structural analysis is an important step in a number of seismic assessment procedures. This paper discusses the applicability of statistical inference concepts for fragility function estimation, describes appropriate fitting approaches for use with various structural analysis strategies, and studies how to fit fragility functions while minimizing the required number of structural analyses. Illustrative results show that multiple stripe analysis produces more efficient fragility estimates than incremental dynamic analysis for a given number of structural analyses, provided that some knowledge of the building’s capacity is available prior to analysis so that relevant portions of the fragility curve can be approximately identified. This finding has other benefits, as the multiple stripe analysis approach allows for different ground motions to be used for analyses at varying intensity levels, to represent the differing characteristics of low intensity and high intensity shaking. The proposed assessment approach also provides a framework for evaluating alternate analysis procedures that may arise in the future.", "title": "" }, { "docid": "22d687204c9e8829d2ee6da4eeea104e", "text": "In speech based emotion recognition, both acoustic features extraction and features classification are usually time consuming,which obstruct the system to be real time. 
In this paper, we proposea novel feature selection (FSalgorithm to filter out the low efficiency features towards fast speech emotion recognition.Firstly, each acoustic feature's discriminative ability, time consumption and redundancy are calculated. Then, we map the original feature space into a nonlinear one to select nonlinear features,which can exploit the underlying relationship among the original features. Thirdly, high discriminative nonlinear feature with low time consumption is initially preserved. Finally, a further selection is followed to obtain low redundant features based on these preserved features. The final selected nonlinear features are used in features' extraction and features' classification in our approach, we call them qualified features. The experimental results demonstrate that recognition time consumption can be dramatically reduced in not only the extraction phase but also the classification phase. Moreover, a competitive of recognition accuracy has been observed in the speech emotion recognition.", "title": "" }, { "docid": "5ca5cfcd0ed34d9b0033977e9cde2c74", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We …nd that RP signi…cantly reduces both brand-name and generic prices, and results in signi…cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi…cant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi…cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for …nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: kurt.brekke@nhh.no. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: tor.holmas@uni.no. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: o.r.straume@eeg.uminho.pt.", "title": "" }, { "docid": "5fd1be2414777efafc369000a816e3fc", "text": "Findings in the social psychology literatures on attitudes, social perception, and emotion demonstrate that social information processing involves embodiment, where embodiment refers both to actual bodily states and to simulations of experience in the brain's modality-specific systems for perception, action, and introspection. We show that embodiment underlies social information processing when the perceiver interacts with actual social objects (online cognition) and when the perceiver represents social objects in their absence (offline cognition). 
Although many empirical demonstrations of social embodiment exist, no particularly compelling account of them has been offered. We propose that theories of embodied cognition, such as the Perceptual Symbol Systems (PSS) account (Barsalou, 1999), explain and integrate these findings, and that they also suggest exciting new directions for research. We compare the PSS account to a variety of related proposals and show how it addresses criticisms that have previously posed problems for the general embodiment approach.", "title": "" }, { "docid": "53595cdb8e7a9e8ee2debf4e0dda6d45", "text": "Botnets have become one of the major attacks in the internet today due to their illicit profitable financial gain. Meanwhile, honeypots have been successfully deployed in many computer security defence systems. Since honeypots set up by security defenders can attract botnet compromises and become spies in exposing botnet membership and botnet attacker behaviours, they are widely used by security defenders in botnet defence. Therefore, attackers constructing and maintaining botnets will be forced to find ways to avoid honeypot traps. In this paper, we present a hardware and software independent honeypot detection methodology based on the following assumption: security professionals deploying honeypots have a liability constraint such that they cannot allow their honeypots to participate in real attacks that could cause damage to others, while attackers do not need to follow this constraint. Attackers could detect honeypots in their botnets by checking whether compromised machines in a botnet can successfully send out unmodified malicious traffic. Based on this basic detection principle, we present honeypot detection techniques to be used in both centralised botnets and Peer-to-Peer (P2P) structured botnets. Experiments show that current standard honeypots and honeynet programs are vulnerable to the proposed honeypot detection techniques. At the end, we discuss some guidelines for defending against general honeypot-aware attacks.", "title": "" }, { "docid": "e45d6e01e4ce1af1e3f5af5d39a01e80", "text": "Link prediction is of fundamental importance in network science and machine learning. Early methods consider only simple topological features, while subsequent supervised approaches typically rely on human-labeled data and feature engineering. In this work, we present a new representation learning-based approach called SEMAC that jointly exploits fine-grained node features as well as the overall graph topology. In contrast to the SGNS or SVD methods espoused in previous representation-based studies, our model represents nodes in terms of subgraph embeddings acquired via a form of convex matrix completion to iteratively reduce the rank, and thereby, more effectively eliminate noise in the representation. Thus, subgraph embeddings and convex matrix completion are elegantly integrated into a novel link prediction framework. Experimental results on several datasets show the effectiveness of our method compared to previous work.", "title": "" }, { "docid": "a1292045684debec0e6e56f7f5e85fad", "text": "BACKGROUND\nLncRNA and microRNA play an important role in the development of human cancers; they can act as a tumor suppressor gene or an oncogene. LncRNA GAS5, originating from the separation from tumor suppressor gene cDNA subtractive library, is considered as an oncogene in several kinds of cancers. The expression of miR-221 affects tumorigenesis, invasion and metastasis in multiple types of human cancers. 
However, there's very little information on the role LncRNA GAS5 and miR-221 play in CRC. Therefore, we conducted this study in order to analyze the association of GAS5 and miR-221 with the prognosis of CRC and preliminary study was done on proliferation, metastasis and invasion of CRC cells. In the present study, we demonstrate the predictive value of long non-coding RNA GAS5 (lncRNA GAS5) and mircoRNA-221 (miR-221) in the prognosis of colorectal cancer (CRC) and their effects on CRC cell proliferation, migration and invasion.\n\n\nMETHODS\nOne hundred and fifty-eight cases with CRC patients and 173 cases of healthy subjects that with no abnormalities, who've been diagnosed through colonoscopy between January 2012 and January 2014 were selected for the study. After the clinicopathological data of the subjects, tissue, plasma and exosomes were collected, lncRNA GAS5 and miR-221 expressions in tissues, plasma and exosomes were measured by reverse transcription quantitative polymerase chain reaction (RT-qPCR). The diagnostic values of lncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes in patients with CRC were analyzed using receiver operating characteristic curve (ROC). Lentiviral vector was constructed for the overexpression of lncRNA GAS5, and SW480 cell line was used for the transfection of the experiment and assigned into an empty vector and GAS5 groups. The cell proliferation, migration and invasion were tested using a cell counting kit-8 assay and Transwell assay respectively.\n\n\nRESULTS\nThe results revealed that LncRNA GAS5 was upregulated while the miR-221 was downregulated in the tissues, plasma and exosomes of patients with CRC. The results of ROC showed that the expressions of both lncRNA GAS5 and miR-221 in the tissues, plasma and exosomes had diagnostic value in CRC. While the LncRNA GAS5 expression in tissues, plasma and exosomes were associated with the tumor node metastasis (TNM) stage, Dukes stage, lymph node metastasis (LNM), local recurrence rate and distant metastasis rate, the MiR-221 expression in tissues, plasma and exosomes were associated with tumor size, TNM stage, Dukes stage, LNM, local recurrence rate and distant metastasis rate. LncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes were found to be independent prognostic factors for CRC. Following the overexpression of GAS5, the GAS5 expressions was up-regulated and miR-221 expression was down-regulated; the rate of cell proliferation, migration and invasion were decreased.", "title": "" }, { "docid": "8f2cfb5cb55b093f67c1811aba8b87e2", "text": "“You make what you measure” is a familiar mantra at datadriven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. 
Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.", "title": "" }, { "docid": "d9ffb9e4bba1205892351b1328977f6c", "text": "Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks. We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach.", "title": "" }, { "docid": "2d3adb98f6b1b4e161d84314958960e5", "text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.", "title": "" }, { "docid": "1b7a8725023d20e36ef929b427db51e5", "text": "Electronic Customer Relationship Management (eCRM) has become the latest paradigm in the world of Customer Relationship Management. Recent business surveys suggest that up to 50% of such implementations do not yield measurable returns on investment. A secondary analysis of 13 case studies suggests that many of these limited success implementations can be attributed to usability and resistance factors. The objective of this paper is to review the general usability and resistance principles in order build an integrative framework for analyzing eCRM case studies. 
The conclusions suggest that if organizations want to get the most from their eCRM implementations they need to revisit the general principles of usability and resistance and apply them.", "title": "" }, { "docid": "4e122b71c30c6c0721d5065adcf0b52c", "text": "License plate recognition usually contains three steps, namely license plate detection/localization, character segmentation and character recognition. When reading characters on a license plate one by one after license plate detection step, it is crucial to accurately segment the characters. The segmentation step may be affected by many factors such as license plate boundaries (frames). The recognition accuracy will be significantly reduced if the characters are not properly segmented. This paper presents an efficient algorithm for character segmentation on a license plate. The algorithm follows the step that detects the license plates using an AdaBoost algorithm. It is based on an efficient and accurate skew and slant correction of license plates, and works together with boundary (frame) removal of license plates. The algorithm is efficient and can be applied in real-time applications. The experiments are performed to show the accuracy of segmentation.", "title": "" }, { "docid": "589fe1890a0852d5880522429527ca44", "text": "The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. Numerous approaches have been developed ranging from neural network models striving to replicate neurophysiology to more abstract mathematical manipulations which identify numerical similarities. Nevertheless a common theme amongst the varied approaches is that learning techniques incorporate a strategic component to try and yield the best possible decision or classification. The mathematics of game theory formally analyzes strategic interactions between competing players and is consequently quite appropriate to apply to the field of machine learning with potential descriptive as well as functional insights. Furthermore, game theoretic mechanism design seeks to develop a framework to achieve a desired outcome, and as such is applicable for defining a paradigm capable of performing classification. In this work we present a game theoretic chip-fire classifier which as an iterated game is able to perform pattern classification.", "title": "" }, { "docid": "dbf5d0f6ce7161f55cf346e46150e8d7", "text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. 
Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0b3555b8c1932a2364a7264cbf2f7c25", "text": "This paper introduces a novel weighted unsupervised learning for object detection using an RGB-D camera. This technique is feasible for detecting the moving objects in the noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object using weighted clustering as a separate cluster. In a preprocessing step, the algorithm calculates the pose 3D position X, Y, Z and RGB color of each data point and then it calculates each data point’s normal vector using the point’s neighbor. After preprocessing, our algorithm calculates k-weights for each data point; each weight indicates membership. Resulting in clustered objects of the scene. Keywords—Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect", "title": "" } ]
subset: scidocsrr
query_id: c86d1ca67012aa1721c42dcea95be26c
query: Extraction and Approximation of Numerical Attributes from the Web
[ { "docid": "a15f80b0a0ce17ec03fa58c33c57d251", "text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.", "title": "" } ]
[ { "docid": "1ca70e99cf3dc1957627efc68af32e0c", "text": "In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.", "title": "" }, { "docid": "d5f43b7405e08627b7f0930cc1ddd99e", "text": "Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software.", "title": "" }, { "docid": "ca1a2eafb7d21438bc933c195c94a49d", "text": "   The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control.    MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams.    MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. 
The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process.    MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today’s and tomorrow’s clinically motivated research.", "title": "" }, { "docid": "79cdd24d14816f45b539f31606a3d5ee", "text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.", "title": "" }, { "docid": "4424a73177671ce5f1abcd304e546434", "text": "Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. 
Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.", "title": "" }, { "docid": "b042f6478ef34f4be8ee9b806ddf6011", "text": "By using an extensive framework for e-learning enablers and disablers (including 37 factors) this paper sets out to identify which of these challenges are most salient for an e-learning course in Sri Lanka. The study includes 1887 informants and data has been collected from year 2004 to 2007, covering opinions of students and staff. A quantitative approach is taken to identify the most important factors followed by a qualitative analysis to explain why and how they are important. The study identified seven major challenges in the following areas: Student support, Flexibility, Teaching and Learning Activities, Access, Academic confidence, Localization and Attitudes. In this paper these challenges will be discussed and solutions suggested.", "title": "" }, { "docid": "754de97083b172ccadc88033a4faa48c", "text": "BACKGROUND\nMany molecularly targeted anticancer agents entering the definitive stage of clinical development benefit only a subset of treated patients. This may lead to missing effective agents by the traditional broad-eligibility randomized trials due to the dilution of the overall treatment effect. We propose a statistically rigorous biomarker-adaptive threshold phase III design for settings in which a putative biomarker to identify patients who are sensitive to the new agent is measured on a continuous or graded scale.\n\n\nMETHODS\nThe design combines a test for overall treatment effect in all randomly assigned patients with the establishment and validation of a cut point for a prespecified biomarker of the sensitive subpopulation. The performance of the biomarker-adaptive design, relative to a traditional design that ignores the biomarker, was evaluated in a simulation study. The biomarker-adaptive design was also used to analyze data from a prostate cancer trial.\n\n\nRESULTS\nIn the simulation study, the biomarker-adaptive design preserved the power to detect the overall effect when the new treatment is broadly effective. When the proportion of sensitive patients as identified by the biomarker is low, the proposed design provided a substantial improvement in efficiency compared with the traditional trial design. Recommendations for sample size planning and implementation of the biomarker-adaptive design are provided.\n\n\nCONCLUSIONS\nA statistically valid test for a biomarker-defined subset effect can be prospectively incorporated into a randomized phase III design without compromising the ability to detect an overall effect if the intervention is beneficial in a broad population.", "title": "" }, { "docid": "7d3fd4245f62d264e41beb43900fd22b", "text": "This article presents a novel recovery method for fixed-wing Unmanned Aerial Vehicles (UAVs), aimed at enabling operations from marine vessels. Instead of using the conventional method of using a fixed net on the ship deck, we propose to suspend a net under two cooperative multirotor UAVs. 
While keeping their relative formation, the multirotor UAVs are able to intercept the incoming fixed-wing UAV along a virtual runway over the sea, and transport it back to the ship. In addition to discussing the concept and design a control system, this paper also presents experimental validation of the proposed concept for a smallscale UAV platform.", "title": "" }, { "docid": "8e6ab2776e8e1ad7cb3d02b9dfbcb733", "text": "We present a case report of a 4months old first born male child which was brought to our hospital with complaints of abdominal distension and mass in the upper abdomen causing feeding difficulty. Child was clinically found to have a firm non tender mass of about 10 x 8cms in the left upper quadrant of the abdomen which was clinically suspected to be Neuroblastoma. The child was subjected to ultrasound examination using 5-7Mhz Linear transducer in Philips HD11XE machine, which revealed a multicystic heterogeneous mass lesion of 10 x 8cms in the left hypochondrium, displacing the left kidney posteriorly and spleen inferiorly and crossing the midline showing significant peripheral colour uptake, possibility of Neuroblastoma. The child was then subject to CT scan of abdomen with contrast enhancement using 16slice Toshiba Activion scanner. The findings were a large, fairly well defined heterogeneous mass showing both solid and cystic areas showing significant internal and peripheral enhancement with areas of coarse amorphous calcifications. The mass was seen to erode the posterior wall of stomach and displacing the oral contrast within the stomach. The bowel loops were displaced inferiorly and towards the right, the left kidney posteriorly and the spleen inferiorly. No adjacent lymphadenopathy was seen. The child later underwent exploratory laparotomy and a large multicystic mass arising from postero-inferior wall of the stomach along its greater curvature was excised and stomach repaired. On histopathology it was proved to be an immature gastric teratoma containing mixed derivatives of all three germ cell layers.", "title": "" }, { "docid": "fb83fca1b1ed1fca15542900bdb3748d", "text": "Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the direction of the problemwhere the dimensionality of measured variables is large. Learning the severity score in such cases brings the issue of which of measured features are relevant. 
We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function.The proposed formulation has a nonsmooth penalty that induces sparsity.This problem is solved by addressing a dual formulationwhich is smooth and allows an efficient optimization.The proposed approachmight be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.", "title": "" }, { "docid": "acd0eb77f9361d4765240922a02f6a54", "text": "We show how the Hamiltonian Monte Carlo algorithm can sometimes be speeded up by “splitting” the Hamiltonian in a way that allows much of the movement around the state space to be done at low computational cost. One context where this is possible is when the log density of the distribution of interest (the potential energy function) can be written as the log of a Gaussian density, which is a quadratic function, plus a slowly-varying function. Hamiltonian dynamics for quadratic energy functions can be analytically solved. With the splitting technique, only the slowlyvarying part of the energy needs to be handled numerically, and this can be done with a larger stepsize (and hence fewer steps) than would be necessary with a direct simulation of the dynamics. Another context where splitting helps is when the most important terms of the potential energy function and its gradient can be evaluated quickly, with only a slowlyvarying part requiring costly computations. With splitting, the quick portion can be handled with a small stepsize, while the costly portion uses a larger stepsize. We show that both of these splitting approaches can reduce the computational cost of sampling from the posterior distribution for a logistic regression model, using either a Gaussian approximation centered on the posterior mode, or a Hamiltonian split into a B. Shahbaba ( ) Department of Statistics and Department of Computer Science, University of California, Irvine, CA 92697, USA e-mail: babaks@uci.edu S. Lan · W.O. Johnson Department of Statistics, University of California, Irvine, CA 92697, USA R.M. Neal Department of Statistics and Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G3, Canada term that depends on only a small number of critical cases, and another term that involves the larger number of cases whose influence on the posterior distribution is small.", "title": "" }, { "docid": "7e264804d56cab24454c59fe73b51884", "text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. 
Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.", "title": "" }, { "docid": "454d4211d068fb5009cced3a3dca774b", "text": "The occurrence of ferroresonance oscillations of high voltage inductive (electromagnetic) voltage transformers (VT) has been recorded and reported in a number of papers and reports. Because of its non-linear characteristic, inductive voltage transformer has the possibility of causing ferroresonance with capacitances present in the transmission network, if initiated by a transient occurrence such as switching operation or fault. One of the solutions for ferroresonance mitigation is introducing an air gap into voltage transformer core magnetic path, thus linearizing its magnetizing characteristic and decreasing the possibility of a ferroresonance occurrence. This paper presents results of numerical ATP-EMTP simulation of typical ferroresonance situation involving inductive voltage transformers in high voltage networks with circuit breaker opening operation after which the voltage transformer remains energized through the circuit breaker grading capacitance. Main variable in calculating to the ferroresonance occurrence probability was the magnetizing characteristic change caused by the introduction of an air gap to the VT core, and separate diagrams are presented for VTs with different air gap length, including the paramount gapped transformer design – open core voltage transformers.", "title": "" }, { "docid": "1655080e43831fa11643fd6d6a478a2a", "text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The softswitching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zerovoltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters. Keywords— Buck converter, coupled inductor, soft switching, zero-current switching (ZCS), zero-voltage switching (ZVS).", "title": "" }, { "docid": "80e4ac10df91cbbc6bbce4d30ec5abcf", "text": "Although many users are aware of the threats that malware pose, users are unaware that malware can infect peripheral devices. Many embedded devices support firmware update capabilities, yet they do not authenticate such updates; this allows adversaries to infect peripherals with malicious firmware. We present a case study of the Logitech G600 mouse, demonstrating attacks on networked systems which are also feasible against airgapped systems. If the target machine is air-gapped, we show that the Logitech G600 has enough space available to host an entire malware package inside its firmware. We also wrote a file transfer utility that transfers the malware from the mouse to the target machine. If the target is networked, the mouse can be used as a persistent threat that updates and reinstalls malware as desired. 
To mitigate these attacks, we implemented signature verification code which is essential to preventing malicious firmware from being installed on the mouse. We demonstrate that it is reasonable to include such signature verification code in the bootloader of the mouse.", "title": "" }, { "docid": "8aadbd4f7e91d3a9bd4ce13b22d302d1", "text": "The design and validation of a wireless monitoring system for dealing with wildlife road crossing problems is addressed. The wildlife detection procedure is based on the Doppler radar technology integrated in wireless sensor network devices. Such a solution tries to overcome the so-called habit effect arising with standard alert road-systems (e.g., static or flashing road signs) introducing the principle of real-time and event-based driver notification. To this end, the radar signal is locally processed by the wireless node to infer the target presence close to roadsides. In case of radar detection, the wireless node promptly transmits the collected information to the control unit, for data storage and further statistics. A prototype of the system has been deployed in a real test-site in Alps region for performance assessment. A selected set of preliminary results are here presented and discussed to show the capabilities of the proposed solution.", "title": "" }, { "docid": "7cd8dee294d751ec6c703d628e0db988", "text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.", "title": "" }, { "docid": "23493c14053a4608203f8e77bd899445", "text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. 
The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.", "title": "" }, { "docid": "578d40b5c82fcc59fa2333e47a99d84c", "text": "Brain tumor is one of the major causes of death among people. It is evident that the chances of survival can be increased if the tumor is detected and classified correctly at its early stage. Conventional methods involve invasive techniques such as biopsy, lumbar puncture and spinal tap method, to detect and classify brain tumors into benign (non cancerous) and malignant (cancerous). A computer aided diagnosis algorithm has been designed so as to increase the accuracy of brain tumor detection and classification, and thereby replace conventional invasive and time consuming techniques. This paper introduces an efficient method of brain tumor classification, where, the real Magnetic Resonance (MR) images are classified into normal, non cancerous (benign) brain tumor and cancerous (malignant) brain tumor. The proposed method follows three steps, (1) wavelet decomposition, (2) textural feature extraction and (3) classification. Discrete Wavelet Transform is first employed using Daubechies wavelet (db4), for decomposing the MR image into different levels of approximate and detailed coefficients and then the gray level co-occurrence matrix is formed, from which the texture statistics such as energy, contrast, correlation, homogeneity and entropy are obtained. The results of co-occurrence matrices are then fed into a probabilistic neural network for further classification and tumor detection. The proposed method has been applied on real MR images, and the accuracy of classification using probabilistic neural network is found to be nearly 100%.", "title": "" }, { "docid": "65bf8b0a7432896d034d68fa77864c65", "text": "1. The data revolution in games – and everywhere else – calls for analysis methods that scale to with dataset size. The solution: game data mining. 2. Game data mining deals with the challenges of acquiring actionable insights from game telemetry. 3. Read the chapter for an introduction to game data mining, an overview of methods commonly and not so commonly used, examples, case studies and a substantial amount of practical advice on how to employ game data mining effectively.", "title": "" } ]
subset: scidocsrr
query_id: dee141ad2fc7c5554d48a24944cbe586
query: Personalisation How a computer can know you better than yourself
[ { "docid": "a1c0670a27313de144451adff35ca83f", "text": "Fab is a recommendation system designed to help users sift through the enormous amount of information available in the World Wide Web. Operational since Dec. 1994, this system combines the content-based and collaborative methods of recommendation in a way that exploits the advantages of the two approaches while avoiding their shortcomings. Fab’s hybrid structure allows for automatic recognition of emergent issues relevant to various groups of users. It also enables two scaling problems, pertaining to the rising number of users and documents, to be addressed.", "title": "" } ]
[ { "docid": "16e90e4dbf5597ce6721a6177344db15", "text": "BACKGROUND\nScoping reviews are used to identify knowledge gaps, set research agendas, and identify implications for decision-making. The conduct and reporting of scoping reviews is inconsistent in the literature. We conducted a scoping review to identify: papers that utilized and/or described scoping review methods; guidelines for reporting scoping reviews; and studies that assessed the quality of reporting of scoping reviews.\n\n\nMETHODS\nWe searched nine electronic databases for published and unpublished literature scoping review papers, scoping review methodology, and reporting guidance for scoping reviews. Two independent reviewers screened citations for inclusion. Data abstraction was performed by one reviewer and verified by a second reviewer. Quantitative (e.g. frequencies of methods) and qualitative (i.e. content analysis of the methods) syntheses were conducted.\n\n\nRESULTS\nAfter searching 1525 citations and 874 full-text papers, 516 articles were included, of which 494 were scoping reviews. The 494 scoping reviews were disseminated between 1999 and 2014, with 45% published after 2012. Most of the scoping reviews were conducted in North America (53%) or Europe (38%), and reported a public source of funding (64%). The number of studies included in the scoping reviews ranged from 1 to 2600 (mean of 118). Using the Joanna Briggs Institute methodology guidance for scoping reviews, only 13% of the scoping reviews reported the use of a protocol, 36% used two reviewers for selecting citations for inclusion, 29% used two reviewers for full-text screening, 30% used two reviewers for data charting, and 43% used a pre-defined charting form. In most cases, the results of the scoping review were used to identify evidence gaps (85%), provide recommendations for future research (84%), or identify strengths and limitations (69%). We did not identify any guidelines for reporting scoping reviews or studies that assessed the quality of scoping review reporting.\n\n\nCONCLUSION\nThe number of scoping reviews conducted per year has steadily increased since 2012. Scoping reviews are used to inform research agendas and identify implications for policy or practice. As such, improvements in reporting and conduct are imperative. Further research on scoping review methodology is warranted, and in particular, there is need for a guideline to standardize reporting.", "title": "" }, { "docid": "cd53b16a73aa58ff890d76a5f61ebaa5", "text": "1Department of Mathematics and Computer Science, Adelphi University, 1 South Avenue, Garden City, New York, NY 11530, U.S.A. 2Department of Computer Science, Polytechnic University, 6 Metrotech Center, Brooklyn, New York, NY 11201, U.S.A. 3Department of Computer Science, 3141 Chestnut Street, Drexel University, Philadelphia, PA 19104, U.S.A. 4AT&T Labs—Research, Room C213, 180 Park Avenue, Florham Park, NJ 07932, U.S.A.", "title": "" }, { "docid": "4438015370e500c4bcdc347b3e332538", "text": "This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles that includes the common four-rotor or quadrotor case.", "title": "" }, { "docid": "76ebe7821ae75b50116d6ac3f156e571", "text": "Since the financial crisis in 2008 organisations have been forced to rethink their risk management. Therefore entities have changed from silo-based Traditional Risk Management to the overarching framework Enterprise Risk Management. 
Yet Enterprise Risk Management is a young model and it has to contend with various challenges. At the moment there are just a few research papers but they claim that this approach is reasonable. The two frameworks COSO and GRC try to support Enterprise Risk Management. Research does not provide studies about their efficiency. The challenges of Enterprise Risk Management are the composition of the system, suitable metrics, the human factor and the complex environment.", "title": "" }, { "docid": "d72f47ad136ebb9c74abe484980b212f", "text": "This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.", "title": "" }, { "docid": "edba38e0515256fbb2e72fce87747472", "text": "The risk of predation can have large effects on ecological communities via changes in prey behaviour, morphology and reproduction. Although prey can use a variety of sensory signals to detect predation risk, relatively little is known regarding the effects of predator acoustic cues on prey foraging behaviour. Here we show that an ecologically important marine crab species can detect sound across a range of frequencies, probably in response to particle acceleration. Further, crabs suppress their resource consumption in the presence of experimental acoustic stimuli from multiple predatory fish species, and the sign and strength of this response is similar to that elicited by water-borne chemical cues. When acoustic and chemical cues were combined, consumption differed from expectations based on independent cue effects, suggesting redundancies among cue types. These results highlight that predator acoustic cues may influence prey behaviour across a range of vertebrate and invertebrate taxa, with the potential for cascading effects on resource abundance.", "title": "" }, { "docid": "20def85748f9d2f71cd34c4f0ca7f57c", "text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. 
To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.", "title": "" }, { "docid": "66432ab91b459c3de8e867c8214029d8", "text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.", "title": "" }, { "docid": "d0cf19d866af58483217befb27d78ce6", "text": "Image retrieval via a structured query is explored in Johnson, et al. [7]. The query is structured as a scene graph and a graphical model is generated from the scene graph’s object, attribute, and relationship structure. Inference is performed on the graphical model with candidate images and the energy results are used to rank the best matches. In [7], scene graph objects that are not in the set of recognized objects are not represented in the graphical model. This work proposes and tests two approaches for modeling the unrecognized objects in order to leverage the attribute and relationship models to improve image retrieval performance.", "title": "" }, { "docid": "36f2be7a14eeb10ad975aa00cfd30f36", "text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. 
sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.", "title": "" }, { "docid": "394e20d6fd7f69ce2f5308951244328f", "text": "Digital multimedia such as images and videos are prevalent on today’s internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose a new framework to perform multimedia forensics by using compact side information to reconstruct the processing history of a multimedia document. We refer to this framework as FASHION, standing for Forensic hASH for informatION assurance. As a first step in the modular design for FASHION, we propose new algorithms based on Radon transform and scale space theory to effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. The FASHION framework is designed to answer a much broader range of questions regarding the processing history of multimedia data than simple binary decision from robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.", "title": "" }, { "docid": "cbf5019b1363b20c15c284d6d76f3281", "text": "Matching articulated shapes represented by voxel-sets reduces to maximal sub-graph isomorphism when each set is described by a weighted graph. Spectral graph theory can be used to map these graphs onto lower dimensional spaces and match shapes by aligning their embeddings in virtue of their invariance to change of pose. Classical graph isomorphism schemes relying on the ordering of the eigenvalues to align the eigenspaces fail when handling large data-sets or noisy data. We derive a new formulation that finds the best alignment between two congruent K-dimensional sets of points by selecting the best subset of eigenfunctions of the Laplacian matrix. The selection is done by matching eigenfunction signatures built with histograms, and the retained set provides a smart initialization for the alignment problem with a considerable impact on the overall performance. Dense shape matching casted into graph matching reduces then, to point registration of embeddings under orthogonal transformations; the registration is solved using the framework of unsupervised clustering and the EM algorithm. Maximal subset matching of non identical shapes is handled by defining an appropriate outlier class. Experimental results on challenging examples show how the algorithm naturally treats changes of topology, shape variations and different sampling densities.", "title": "" }, { "docid": "90abf21c7a6929a47d789c3e1c56f741", "text": "Nearly 40 years ago, Dr. R.J. Gibbons made the first reports of the clinical relevance of what we now know as bacterial biofilms when he published his observations of the role of polysaccharide glycocalyx formation on teeth by Streptococcus mutans [Sci. Am. 238 (1978) 86]. As the clinical relevance of bacterial biofilm formation became increasingly apparent, interest in the phenomenon exploded. 
Studies are rapidly shedding light on the biomolecular pathways leading to this sessile mode of growth but many fundamental questions remain. The intent of this review is to consider the reasons why bacteria switch from a free-floating to a biofilm mode of growth. The currently available wealth of data pertaining to the molecular genetics of biofilm formation in commonly studied, clinically relevant, single-species biofilms will be discussed in an effort to decipher the motivation behind the transition from planktonic to sessile growth in the human body. Four potential incentives behind the formation of biofilms by bacteria during infection are considered: (1) protection from harmful conditions in the host (defense), (2) sequestration to a nutrient-rich area (colonization), (3) utilization of cooperative benefits (community), (4) biofilms normally grow as biofilms and planktonic cultures are an in vitro artifact (biofilms as the default mode of growth).", "title": "" }, { "docid": "a23e8d3781d60e708ddb0ef570b700f7", "text": "Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various physical layer research challenges in MIMO-OFDM system design, including physical channel measurements and modeling, analog beam forming techniques using adaptive antenna arrays, space-time techniques for MIMO-OFDM, error control coding techniques, OFDM preamble and packet design, and signal processing algorithms used to perform time and frequency synchronization, channel estimation, and channel tracking in MIMO-OFDM systems. Finally, the paper considers a software radio implementation of MIMO-OFDM.", "title": "" }, { "docid": "20bbd964987c9d13c5c1049f3113c6d2", "text": "Food and eating environments likely contribute to the increasing epidemic of obesity and chronic diseases, over and above individual factors such as knowledge, skills, and motivation. Environmental and policy interventions may be among the most effective strategies for creating population-wide improvements in eating. This review describes an ecological framework for conceptualizing the many food environments and conditions that influence food choices, with an emphasis on current knowledge regarding the home, child care, school, work site, retail store, and restaurant settings. Important issues of disparities in food access for low-income and minority groups and macrolevel issues are also reviewed. The status of measurement and evaluation of nutrition environments and the need for action to improve health are highlighted.", "title": "" }, { "docid": "1d8917f5faaed1531fdcd4df06ff0920", "text": "4G cellular standards are targeting aggressive spectrum reuse (frequency reuse 1) to achieve high system capacity and simplify radio network planning. The increase in system capacity comes at the expense of SINR degradation due to increased intercell interference, which severely impacts cell-edge user capacity and overall system throughput. Advanced interference management schemes are critical for achieving the required cell edge spectral efficiency targets and to provide ubiquity of user experience throughout the network. 
In this article we compare interference management solutions across the two main 4G standards: IEEE 802.16m (WiMAX) and 3GPP-LTE. Specifically, we address radio resource management schemes for interference mitigation, which include power control and adaptive fractional frequency reuse. Additional topics, such as interference management for multitier cellular deployments, heterogeneous architectures, and smart antenna schemes will be addressed in follow-up papers.", "title": "" }, { "docid": "1c5591bec1b8bfab63309aa2eb488e83", "text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.", "title": "" }, { "docid": "ecf9469f4fdd38e6368ae0629c7a1195", "text": "Fingerprint image enhancement is an essential preprocessing step in fingerprint recognition applications. In this paper, we propose a novel filter design method for fingerprint image enhancement, primarily inspired from the traditional Gabor filter (TGF). The previous fingerprint image enhancement methods based on TGF banks have some drawbacks in their image-dependent parameter selection strategy, which leads to artifacts in some cases. To address this issue, we develop an improved version of the TGF, called the modified Gabor filter (MGF). Its parameter selection scheme is image-independent. The remarkable advantages of our MGF over the TGF consist in preserving fingerprint image structure and achieving image enhancement consistency. Experimental results indicate that the proposed MGF enhancement algorithm can reduce the FRR of a fingerprint matcher by approximately 2% at a FAR of 0.01%. 2003 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "9e35454e25d78714576f140928d4a666", "text": "Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., “My house is bigger than me.” However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, “Tyler entered his house” implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance.", "title": "" }, { "docid": "176a982a60e302dcdd50484562dec7ce", "text": "The palatine aponeurosis is a thin, fibrous lamella comprising the extended tendons of the tensor veli palatini muscles, attached to the posterior border and inferior surface of the palatine bone. In dentistry, the relationship between the “vibrating line” and the border of the hard and soft palate has long been discussed. However, to our knowledge, there has been no discussion of the relationship between the palatine aponeurosis and the vibrating line(s). Twenty sides from ten fresh frozen White cadaveric heads (seven males and three females) whose mean age at death was 79 years) were used in this study. The thickness of the mucosa including the submucosal tissue was measured. The maximum length of the palatine aponeurosis on each side and the distance from the posterior nasal spine to the posterior border of the palatine aponeurosis in the midline were also measured. The relationship between the marked borderlines and the posterior border of the palatine bone was observed. The thickness of the mucosa and submucosal tissue on the posterior nasal spine and the maximum length of the palatine aponeurosis were 3.4 mm, and 12.2 mm on right side and 12.8 mm on left, respectively. The length of the palatine aponeurosis in the midline was 4.9 mm. In all specimens, the borderline between the compressible and incompressible parts corresponded to the posterior border of the palatine bone.", "title": "" } ]
scidocsrr
c23304081a262f1fff80fadacd664000
Provably secure session key distribution: the three party case
[ { "docid": "5a28fbdcce61256fd67d97fc353b138b", "text": "Use of encryption to achieve authenticated communication in computer networks is discussed. Example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee. Both conventional and public-key encryption algorithms are considered as the basis for protocols.", "title": "" } ]
[ { "docid": "b5009853d22801517431f46683b235c2", "text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.", "title": "" }, { "docid": "47501c171c7b3f8e607550c958852be1", "text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.", "title": "" }, { "docid": "8e06dbf42df12a34952cdd365b7f328b", "text": "Data and theory from prism adaptation are reviewed for the purpose of identifying control methods in applications of the procedure. Prism exposure evokes three kinds of adaptive or compensatory processes: postural adjustments (visual capture and muscle potentiation), strategic control (including recalibration of target position), and spatial realignment of various sensory-motor reference frames. Muscle potentiation, recalibration, and realignment can all produce prism exposure aftereffects and can all contribute to adaptive performance during prism exposure. Control over these adaptive responses can be achieved by manipulating the locus of asymmetric exercise during exposure (muscle potentiation), the similarity between exposure and post-exposure tasks (calibration), and the timing of visual feedback availability during exposure (realignment).", "title": "" }, { "docid": "15dbf1ad05c8219be484c01145c09b6c", "text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O ( √ Td ln(KT ln(T )/δ) ) regret bound that holds with probability 1− δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. 
We also prove a lower bound of Ω( √ Td) for this setting, matching the upper bound up to logarithmic factors.", "title": "" }, { "docid": "6cce055b947b1d222bfdee01507416a1", "text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board of a vehicle, and then identifies road signs assisting the driver of the vehicle to properly operate the vehicle. This paper presents an automatic road sign recognition system capable of analysing live images, detecting multiple road signs within images, and classifying the type of the detected road signs. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space and locates road signs. The classification module determines the type of detected road signs using a series of one to one architectural Multi Layer Perceptron neural networks. The performances of the classifiers that are trained using Resillient Backpropagation and Scaled Conjugate Gradient algorithms are compared. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 96% using Scaled Conjugate Gradient trained classifiers.", "title": "" }, { "docid": "5f52b31afe9bf18f009a10343ccedaf0", "text": "The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating the quality degradation caused by dim LCD backlight for mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of human visual system to preserve the color appearance of backlight-scaled images as much as possible. Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.", "title": "" }, { "docid": "29378712a9ab9031879c95ee8baad923", "text": "In recent decades, different extensional forms of fuzzy sets have been developed. However, these multitudinous fuzzy sets are unable to deal with quantitative information better. Motivated by fuzzy linguistic approach and hesitant fuzzy sets, the hesitant fuzzy linguistic term set was introduced and it is a more reasonable set to deal with quantitative information. During the process of multiple criteria decision making, it is necessary to propose some aggregation operators to handle hesitant fuzzy linguistic information. In this paper, two aggregation operators for hesitant fuzzy linguistic term sets are introduced, which are the hesitant fuzzy linguistic Bonferroni mean operator and the weighted hesitant fuzzy linguistic Bonferroni mean operator. Correspondingly, several properties of these two aggregation operators are discussed. Finally, a practical case is shown in order to express the application of these two aggregation operators. 
This case mainly discusses how to choose the best hospital about conducting the whole society resource management research included in a wisdom medical health system.", "title": "" }, { "docid": "6e70435f2d434581f00962b5677facfa", "text": "Many institutions of Higher Education and Corporate Training Institutes are resorting to e-Learning as a means of solving authentic learning and performance problems, while other institutions are hopping onto the bandwagon simply because they do not want to be left behind. Success is crucial because an unsuccessful effort to implement e-Learning will be clearly reflected in terms of the return of investment. One of the most crucial prerequisites for successful implementation of e-Learning is the need for careful consideration of the underlying pedagogy, or how learning takes place online. In practice, however, this is often the most neglected aspect in any effort to implement e-Learning. The purpose of this paper is to identify the pedagogical principles underlying the teaching and learning activities that constitute effective e-Learning. An analysis and synthesis of the principles and ideas by the practicing e-Learning company employing the author will also be presented, in the perspective of deploying an effective Learning Management Systems (LMS). © 2002 Published by Elsevier Science Inc.", "title": "" }, { "docid": "5ec1cff52a55c5bd873b5d0d25e0456b", "text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and small-set in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.", "title": "" }, { "docid": "131415093146eeecb6231e22e514170b", "text": "Aspect-Oriented Programming (AOP) provides another way of thinking about program structure that allows developers to separate and modularize concerns like crosscutting concerns. These concerns are maintained in aspects that allows to easily maintain both the core and crosscutting concerns. Much research on this area has been done focused on traditional software development. Although little has been done in the Web development context. In this paper is presented an overview of existing AOP PHP development tools identifying their strengths and weaknesses. Then we compare the existing AOP PHP development tools presented in this paper. We then discuss how these tools can be effectively used in the Web development. 
Finally, is discussed how AOP can enhance the Web development and are presented some future work possibilities on this area.", "title": "" }, { "docid": "efe4f4e726e40731432a95dbdfcb9f89", "text": "We propose the combination of a keyframe-based monocular SLAM system and a global localization method. The SLAM system runs locally on a camera-equipped mobile client and provides continuous, relative 6DoF pose estimation as well as keyframe images with computed camera locations. As the local map expands, a server process localizes the keyframes with a pre-made, globally-registered map and returns the global registration correction to the mobile client. The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.", "title": "" }, { "docid": "0b44782174d1dae460b86810db8301ec", "text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. We illustrate the applications of RJMCMC with 3 examples from our earlier working involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ec300259d5bcdcf3373d05ddcd8f99ae", "text": "This research focuses on the flapping wing mechanism design for the micro air vehicle model. The paper starts with analysis the topological structure characteristics of Single-Crank Double-Rocker mechanism. Following the design procedure, all of the possible combinations of flapping mechanism which contains not more than 6 components were generated. The design procedure is based on Hong-Sen Yan's creative design theory for mechanical devices. 
This research designed 31 different types of mechanisms, which provide more directions for the design and fabrication of the micro air vehicle model.", "title": "" }, { "docid": "d4ed4cad670b1e11cfb3c869e34cf9fd", "text": "BACKGROUND\nDespite the many antihypertensive medications available, two-thirds of patients with hypertension do not achieve blood pressure control. This is thought to be due to a combination of poor patient education, poor medication adherence, and \"clinical inertia.\" The present trial evaluates an intervention consisting of health coaching, home blood pressure monitoring, and home medication titration as a method to address these three causes of poor hypertension control.\n\n\nMETHODS/DESIGN\nThe randomized controlled trial will include 300 patients with poorly controlled hypertension. Participants will be recruited from a primary care clinic in a teaching hospital that primarily serves low-income populations.An intervention group of 150 participants will receive health coaching, home blood pressure monitoring, and home-titration of antihypertensive medications during 6 months. The control group (n=150) will receive health coaching plus home blood pressure monitoring for the same duration. A passive control group will receive usual care. Blood pressure measurements will take place at baseline, and after 6 and 12 months. The primary outcome will be change in systolic blood pressure after 6 and 12 months. Secondary outcomes measured will be change in diastolic blood pressure, adverse events, and patient and provider satisfaction.\n\n\nDISCUSSION\nThe present study is designed to assess whether the 3-pronged approach of health coaching, home blood pressure monitoring, and home medication titration can successfully improve blood pressure, and if so, whether this effect persists beyond the period of the intervention.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT01013857.", "title": "" }, { "docid": "f6f1462e8edd8200948168423c87c1bf", "text": "Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects for which all users care about and a set of aspects that are specific to different subsets of users. To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about.\n In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such a global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary.\n The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. 
Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches.", "title": "" }, { "docid": "0860b29f52d403a0ff728a3e356ec071", "text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.", "title": "" }, { "docid": "c94d01ee0aaa8a70ce4e3441850316a6", "text": "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction.", "title": "" }, { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "a29a61f5ad2e4b44e8e3d11b471a0f06", "text": "To ascertain by MRI the presence of filler injected into facial soft tissue and characterize complications by contrast enhancement. 
Nineteen volunteers without complications were initially investigated to study the MRI features of facial fillers. We then studied another 26 patients with clinically diagnosed filler-related complications using contrast-enhanced MRI. TSE-T1-weighted, TSE-T2-weighted, fat-saturated TSE-T2-weighted, and TIRM axial and coronal scans were performed in all patients, and contrast-enhanced fat-suppressed TSE-T1-weighted scans were performed in complicated patients, who were then treated with antibiotics. Patients with soft-tissue enhancement and those without enhancement but who did not respond to therapy underwent skin biopsy. Fisher’s exact test was used for statistical analysis. MRI identified and quantified the extent of fillers. Contrast enhancement was detected in 9/26 patients, and skin biopsy consistently showed inflammatory granulomatous reaction, whereas in 5/17 patients without contrast enhancement, biopsy showed no granulomas. Fisher’s exact test showed significant correlation (p < 0.001) between subcutaneous contrast enhancement and granulomatous reaction. Cervical lymph node enlargement (longitudinal axis >10 mm) was found in 16 complicated patients (65 %; levels IA/IB/IIA/IIB). MRI is a useful non-invasive tool for anatomical localization of facial dermal filler; IV gadolinium administration is advised in complicated cases for characterization of granulomatous reaction. • MRI is a non-invasive tool for facial dermal filler detection and localization. • MRI-criteria to evaluate complicated/non-complicated cases after facial dermal filler injections are defined. • Contrast-enhanced MRI detects subcutaneous inflammatory granulomatous reaction due to dermal filler. • 65 % patients with filler-related complications showed lymph-node enlargement versus 31.5 % without complications. • Lymph node enlargement involved cervical levels (IA/IB/IIA/IIB) that drained treated facial areas.", "title": "" }, { "docid": "2c4a2d41653f05060ff69f1c9ad7e1a6", "text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. 
While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in a fairly detailed manner.", "title": "" } ]
scidocsrr
44011383e338c811690113ab6f0e7146
Challenges on Large Scale Surveillance Video Analysis
[ { "docid": "4805c5df39392619d00a6afaba768fad", "text": "We present a novel approach to online multi-target tracking based on recurrent neural networks (RNNs). Tracking multiple objects in real-world scenes involves many challenges, including a) an a-priori unknown and time-varying number of targets, b) a continuous state estimation of all present targets, and c) a discrete combinatorial problem of data association. Most previous methods involve complex models that require tedious tuning of parameters. Here, we propose for the first time, an end-to-end learning approach for online multi-target tracking. Existing deep learning methods are not designed for the above challenges and cannot be trivially applied to the task. Our solution addresses all of the above points in a principled way. Experiments on both synthetic and real data show promising results obtained at ≈300 Hz on a standard CPU, and pave the way towards future research in this direction.", "title": "" }, { "docid": "2e6c0d221b018569ad7dc10204cbf64e", "text": "Vehicle re-identification is an important problem and has many applications in video surveillance and intelligent transportation. It gains increasing attention because of the recent advances of person re-identification techniques. However, unlike person re-identification, the visual differences between pairs of vehicle images are usually subtle and even challenging for humans to distinguish. Incorporating additional spatio-temporal information is vital for solving the challenging re-identification task. Existing vehicle re-identification methods ignored or used oversimplified models for the spatio-temporal relations between vehicle images. In this paper, we propose a two-stage framework that incorporates complex spatio-temporal information for effectively regularizing the re-identification results. Given a pair of vehicle images with their spatiotemporal information, a candidate visual-spatio-temporal path is first generated by a chain MRF model with a deeply learned potential function, where each visual-spatiotemporal state corresponds to an actual vehicle image with its spatio-temporal information. A Siamese-CNN+Path- LSTM model takes the candidate path as well as the pairwise queries to generate their similarity score. Extensive experiments and analysis show the effectiveness of our proposed method and individual components.", "title": "" }, { "docid": "6191f9b6d5c6b04c6c20191a3d1bf1fd", "text": "When considering person re-identification (re-ID) as a retrieval process, re-ranking is a critical step to improve its accuracy. Yet in the re-ID community, limited effort has been devoted to re-ranking, especially those fully automatic, unsupervised solutions. In this paper, we propose a k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is that if a gallery image is similar to the probe in the k-reciprocal nearest neighbors, it is more likely to be a true match. Specifically, given an image, a k-reciprocal feature is calculated by encoding its k-reciprocal nearest neighbors into a single vector, which is used for re-ranking under the Jaccard distance. The final distance is computed as the combination of the original distance and the Jaccard distance. Our re-ranking method does not require any human interaction or any labeled data, so it is applicable to large-scale datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW datasets confirm the effectiveness of our method.", "title": "" } ]
[ { "docid": "85a7176961aec4f8e5bd4335154d929c", "text": "The technology behind information systems evolves at an exponential rate, while at the same time becoming more and more ubiquitous. This brings with it an implicit rise in the average complexity of systems as well as the number of external interactions. In order to allow a proper assessment of the security of such (sub)systems, a whole arsenal of methodologies, methods and tools have been developed in recent years. However, most security auditors commonly use a very small subset of this collection, that best suits their needs. This thesis aims at uncovering the differences and limitations of the most common Risk Assessment frameworks, the conceptual models that support them, as well as the tools that implement them. This is done in order to gain a better understanding of the applicability of each method and/or tool and suggest guidelines to picking the most suitable one. 0000000 Current Established Risk Assessment Methodologies and Tools Page 3 0000000 Current Established Risk Assessment Methodologies and Tools Page 4", "title": "" }, { "docid": "abedd6f0896340a190750666b1d28d91", "text": "This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.", "title": "" }, { "docid": "135deb35cf3600cba8e791d604e26ffb", "text": "Much of this book describes the algorithms behind search engines and information retrieval systems. By contrast, this chapter focuses on the human users of search systems, and the window through which search systems are seen: the search user interface. The role of the search user interface is to aid in the searcher's understanding and expression of their information needs, and to help users formulate their queries, select among available information sources, understand search results, and keep track of the progress of their search. In the first edition of this book, very little was known about what makes for an effective search interface. In the intervening years, much has become understood about which ideas work from a usability perspective, and which do not. 
This chapter briefly summarizes the state of the art of search interface design, both in terms of developments in academic research as well as in deployment in commercial systems. The sections that follow discuss how people search, search interfaces today, visualization in search interfaces, and the design and evaluation of search user interfaces. Search tasks range from the relatively simple (e.g., looking up disputed facts or finding weather information) to the rich and complex (e.g., job seeking and planning vacations). Search interfaces should support a range of tasks, while taking into account how people think about searching for information. This section summarizes theoretical models about and empirical observations of the process of online information seeking. Information Lookup versus Exploratory Search User interaction with search interfaces differs depending on the type of task, the amount of time and effort available to invest in the process, and the domain expertise of the information seeker. The simple interaction dialogue used in Web search engines is most appropriate for finding answers to questions or to finding Web sites or other resources that act as search starting points. But, as Marchionini [89] notes, the \" turn-taking \" interface of Web search engines is inherently limited and is many cases is being supplanted by speciality search engines – such as for travel and health information – that offer richer interaction models. Marchionini [89] makes a distinction between information lookup and exploratory search. Lookup tasks are akin to fact retrieval or question answering, and are satisfied by short, discrete pieces of information: numbers, dates, names, or names of files or Web sites. Standard Web search interactions (as well as standard database management system queries) can …", "title": "" }, { "docid": "45356e33e51d8d2e2bfb6365d8269a69", "text": "We survey research on self-driving cars published in the literature focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacles mapping, moving obstacles detection and tracking, road mapping, traffic signalization detection and recognition, among others. The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the UFES's car, IARA. Finally, we list prominent autonomous research cars developed by technology companies and reported in the media.", "title": "" }, { "docid": "1186fa429d435d0e2009e8b155cf92cc", "text": "Recommender Systems are software tools and techniques for suggesting items to users by considering their preferences in an automated fashion. The suggestions provided are aimed at support users in various decisionmaking processes. 
Technically, recommender system has their origins in different fields such as Information Retrieval (IR), text classification, machine learning and Decision Support Systems (DSS). Recommender systems are used to address the Information Overload (IO) problem by recommending potentially interesting or useful items to users. They have proven to be worthy tools for online users to deal with the IO and have become one of the most popular and powerful tools in E-commerce. Many existing recommender systems rely on the Collaborative Filtering (CF) and have been extensively used in E-commerce .They have proven to be very effective with powerful techniques in many famous E-commerce companies. This study presents an overview of the field of recommender systems with current generation of recommendation methods and examines comprehensively CF systems with its algorithms.", "title": "" }, { "docid": "164e5bde10882e3f7a6bcdf473eb7387", "text": "This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.", "title": "" }, { "docid": "fdb3afefbb8e96eed2e35e8c2a3fd015", "text": "BACKGROUND\nAvoidant/Restrictive Food Intake Disorder (ARFID) is a \"new\" diagnosis in the recently published DSM-5, but there is very little literature on patients with ARFID. Our objectives were to determine the prevalence of ARFID in children and adolescents undergoing day treatment for an eating disorder, and to compare ARFID patients to other eating disorder patients in the same cohort.\n\n\nMETHODS\nA retrospective chart review of 7-17 year olds admitted to a day program for younger patients with eating disorders between 2008 and 2012 was performed. 
Patients with ARFID were compared to those with anorexia nervosa, bulimia nervosa, and other specified feeding or eating disorder/unspecified feeding or eating disorder with respect to demographics, anthropometrics, clinical symptoms, and psychometric testing, using Chi-square, ANOVA, and post-hoc analysis.\n\n\nRESULTS\n39/173 (22.5%) patients met ARFID criteria. The ARFID group was younger than the non-ARFID group and had a greater proportion of males. Similar degrees of weight loss and malnutrition were found between groups. Patients with ARFID reported greater fears of vomiting and/or choking and food texture issues than those with other eating disorders, as well as greater dependency on nutritional supplements at intake. Children's Eating Attitudes Test scores were lower for children with than without ARFID. A higher comorbidity of anxiety disorders, pervasive developmental disorder, and learning disorders, and a lower comorbidity of depression, were found in those with ARFID.\n\n\nCONCLUSIONS\nThis study demonstrates that there are significant demographic and clinical characteristics that differentiate children with ARFID from those with other eating disorders in a day treatment program, and helps substantiate the recognition of ARFID as a distinct eating disorder diagnosis in the DSM-5.", "title": "" }, { "docid": "f6d57563226c779e7e44a638da35276f", "text": "Given the substantial investment in information technology (IT), and the significant impact it has on organizational success, organisations consume considerable resources to manage acquisition and use of IT in organizations. While, various arguments proposed suggest which IT governance arrangements may work best, our understanding of the effectiveness of such initiatives is limited. We examine the relationship between the effectiveness of IT steering committee-driven IT governance initiatives and firm’s IT management and IT infrastructure related capabilities. We further propose that firm’s IT-related capabilities, generated through IT governance initiatives should improve its business processes and firm-level performance. We test these relationships empirically by a field survey of 216 firms. Results of this study suggest that a firms’ effectiveness of IT steering committee-driven IT governance initiatives positively relate to the level of their IT-related capabilities. We also found positive relationships between IT-related capabilities and internal process-level performance. Our results also support the conjecture that improvement in internal process-level performance will be positively related to improvement in customer service and firm-level performance. For researchers, we demonstrate that the resource-based theory provides a more robust explanation of the determinants of firms IT governance initiatives. This would be ideal in evaluating other IT governance initiatives effectiveness in relation to how they contribute to building performance-differentiating IT-related capabilities. For decision makers, we hope our study has reiterated the notion that IT governance is truly a coordinated effort, embracing all levels of human resources.", "title": "" }, { "docid": "701283830f7f2e371ddc11223d7a776c", "text": "Overtaking is a complex and hazardous driving maneuver for intelligent vehicles. When to initiate overtaking and how to complete overtaking are critical issues for an overtaking intelligent vehicle. We propose an overtaking control method based on the estimation of the conflict probability. 
This method uses the conflict probability as the safety indicator and completes overtaking by tracking a safe conflict probability. The conflict probability is estimated by the future relative position of intelligent vehicles, and the future relative position is estimated by using the dynamics models of the intelligent vehicles. The proposed method uses model predictive control to track a desired safe conflict probability and synthesizes decision making and control of the overtaking maneuver. The effectiveness of this method has been validated in different experimental configurations, and the effects of some parameters in this control method have also been investigated.", "title": "" }, { "docid": "c200b79726ca0b441bc1311975bf0008", "text": "This article introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90nm to 22nm and beyond. At microarchitectural level, McPAT includes models for the fundamental components of a complete chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, and integrated system components such as memory controllers and Ethernet controllers. At circuit level, McPAT supports detailed modeling of critical-path timing, area, and power. At technology level, McPAT models timing, area, and power for the device types forecast in the ITRS roadmap. McPAT has a flexible XML interface to facilitate its use with many performance simulators.\n Combined with a performance simulator, McPAT enables architects to accurately quantify the cost of new ideas and assess trade-offs of different architectures using new metrics such as Energy-Delay-Area2 Product (EDA2P) and Energy-Delay-Area Product (EDAP). This article explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting trade-offs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies from cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks for manycore designs at the 22nm technology shows that 8-core clustering gives the best energy-delay product, whereas when die area is taken into account, 4-core clustering gives the best EDA2P and EDAP.", "title": "" }, { "docid": "c24bfd3b7bbc8222f253b004b522f7d5", "text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. 
This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).", "title": "" }, { "docid": "43685bd1927f309c8b9a5edf980ab53f", "text": "In this paper we propose a pipeline for accurate 3D reconstruction from multiple images that deals with some of the possible sources of inaccuracy present in the input data. Namely, we address the problem of inaccurate camera calibration by including a method [1] adjusting the camera parameters in a global structure-and-motion problem which is solved with a depth map representation that is suitable to large scenes. Secondly, we take the triangular mesh and calibration improved by the global method in the first phase to refine the surface both geometrically and radiometrically. Here we propose surface energy which combines photo consistency with contour matching and minimize it with a gradient method. Our main contribution lies in effective computation of the gradient that naturally balances weight between regularizing and data terms by employing scale space approach to find the correct local minimum. The results are demonstrated on standard high-resolution datasets and a complex outdoor scene.", "title": "" }, { "docid": "b990a21742a1db59811d636368527ab0", "text": "We describe a high-performance implementation of the lattice Boltzmann method (LBM) for sparse geometries on graphic processors. In our implementation we cover the whole geometry with a uniform mesh of small tiles and carry out calculations for each tile independently with proper data synchronization at the tile edges. For this method, we provide both a theoretical analysis of complexity and the results for real implementations involving two-dimensional (2D) and three-dimensional (3D) geometries. Based on the theoretical model, we show that tiles offer significantly smaller bandwidth overheads than solutions based on indirect addressing. For 2D lattice arrangements, a reduction in memory usage is also possible, although at the cost of diminished performance. We achieved a performance of 682 MLUPS on GTX Titan (72 percent of peak theoretical memory bandwidth) for the D3Q19 lattice arrangement and double-precision data.", "title": "" }, { "docid": "19879b108f668f3125e485daf19ab453", "text": "This paper describes the development of anisotropic conductive films (ACFs) for ultra-fine pitch chip-on-glass (COG) application. In order to have reliable COG using ACF at fine pitch, the number of conductive particles trapped between the bump and the substrate pad should be sufficient, with fewer conductive particles between adjacent bumps. The anisotropic conductive film has a double-layered structure, in which the ACF and NCF layer thicknesses are optimized to place as many conductive particles as possible on the bumps after COG bonding. In the ACF layer, non-conductive particles with a diameter 1/5 that of the conductive particles are added to prevent electrical shorts between the bumps of the COG assembly. The conductive particles are naturally insulated by the nonconductive particles even when conductive particles flow into and agglomerate in the narrow gap between bumps during COG bonding. Also, the flow of the conductive particles is restrained by the nonconductive particles, so the number of conductive particles is constantly maintained.
To ensure the insulation property at a 10 μm gap, insulating coated conductive particles were used in the ACF layer composition. The double-layered ACF using a low-temperature curable binder system is also effective in reducing the warpage level of the COG assembly due to its low modulus and low bonding temperature.", "title": "" }, { "docid": "9766e0507346e46e24790a4873979aa4", "text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in a single domain. To the best of our knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in the recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.", "title": "" }, { "docid": "fe79ee9979ed13aa7d1625989adef9f9", "text": "In this paper we propose and carefully evaluate a sequence labeling framework which solely utilizes sparse indicator features derived from dense distributed word representations. The proposed model obtains (near) state-of-the-art performance for both part-of-speech tagging and named entity recognition for a variety of languages. Our model relies only on a few thousand sparse coding-derived features, without applying any modification of the word representations employed for the different tasks. The proposed model has favorable generalization properties as it retains over 89.8% of its average POS tagging accuracy when trained at 1.2% of the total available training data, i.e. 150 sentences per language.", "title": "" }, { "docid": "a9f9f918d0163e18cf6df748647ffb05", "text": "In previous work, we have shown that using terms from around citations in citing papers to index the cited paper, in addition to the cited paper's own terms, can improve retrieval effectiveness. Now, we investigate how to select text from around the citations in order to extract good index terms. We compare the retrieval effectiveness that results from a range of contexts around the citations, including no context, the entire citing paper, some fixed windows and several variations with linguistic motivations. We conclude with an analysis of the benefits of more complex, linguistically motivated methods for extracting citation index terms, over using a fixed window of terms.
We speculate that there might be some advantage to using computational linguistic techniques for this task.", "title": "" }, { "docid": "8324dc0dfcfb845739a22fb9321d5482", "text": "In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discriminator and q(x) corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions. 1", "title": "" }, { "docid": "16f5686c1675d0cf2025cf812247ab45", "text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of transformer to release the energy stored in the leakage inductor of transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter. I. Introduction Modern", "title": "" } ]
scidocsrr
6b1090f8de26d7ac41f87c6b606299cf
Clustered Multi-Task Learning: A Convex Formulation
[ { "docid": "5b0e088e2bddd0535bc9d2dfbfeb0298", "text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.", "title": "" }, { "docid": "d5771929cdaf41ce059e00b35825adf2", "text": "We develop a new collaborative filtering (CF) method that combines both previously known users’ preferences, i.e. standard CF, as well as product/user attributes, i.e. classical function approximation, to predict a given user’s interest in a particular product. Our method is a generalized low rank matrix completion problem, where we learn a function whose inputs are pairs of vectors – the standard low rank matrix completion problem being a special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single task learning methods.", "title": "" } ]
[ { "docid": "b56d144f1cda6378367ea21e9c76a39e", "text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (gave about 85%-90% goodness of fit) outperforming the other classifiers.", "title": "" }, { "docid": "f405c62d932eec05c55855eb13ba804c", "text": "Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions, in order to establish the current state of the art and trends of the technology, to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and is heading. This paper first presents a brief overview of well-established multilevel converters strongly oriented to their current state in industrial applications to then center the discussion on the new converters that have made their way into the industry. In addition, new promising topologies are discussed. Recent advances made in modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to show nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.", "title": "" }, { "docid": "19699035d427e648fa495628dac79c71", "text": "We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model th finite horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is highdimensional, continuous, non-differentiable, nonlinear , nonGaussian and must be solved in real-time. Most existing techniques for stochastic planning and reinforcement lear ning are therefore inapplicable. To solve this extremely com plex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncer tainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually-guide mobile robot. 
The solution proposed here is also applicable to other closelyrelated domains, including active vision, sequential expe rimental design, dynamic sensing and calibration with mobile sensors.", "title": "" }, { "docid": "8a46fe0168f838f6dd96c8dc5878e984", "text": "As e-commerce offers new channels for companies to reach consumers, it also brings about significant challenges to the ability to response to customer changing needs, which is named “customer agility”. Both practitioners and researchers consider information management as a source of customer agility. Drawing on the information management literature, this research program attempts to propose an integrative information management framework to study the achievement of customer agility. Subsequently, a process model of how information management helps firms achieve customer agility will be developed. This is achieved by conducting a case study of a Chinese B2C company. This research in progress article presents the preliminary findings from the first visit to the company. It shows that customer agility is achieved through establishing information management structure, developing information management capability and instilling information behaviors and values. More interviews will be conducted in the next phase and the findings will be expanded.", "title": "" }, { "docid": "ed23a782c3e4f03790fb5f7ec95d926c", "text": "This paper presents two WR-3 band (220–325 GHz) filters, one fabricated in metal using high precision computer numerically controlled milling and the other made with metallized SU-8 photoresist technology. Both are based on three coupled resonators, and are designed for a 287.3–295.9-GHz passband, and a 30-dB rejection between 317.7 and 325.9 GHz. The first filter is an extracted pole filter coupled by irises, and is precision milled using the split-block approach. The second filter is composed of three silver-coated SU-8 layers, each 432 μm thick. The filter structures are specially chosen to take advantage of the fabrication processes. When fabrication tolerances are accounted for, very good agreement between measurements and simulations are obtained, with median passband insertion losses of 0.41 and 0.45 dB for the metal and SU-8 devices, respectively. These two filters are potential replacements of frequency selective surface filters used in heterodyne radiometers for unwanted sideband rejection.", "title": "" }, { "docid": "08f9717de25d01f07b96b2c9bc851b31", "text": "This paper addresses the imaging of objects located under a forest cover using polarimetric synthetic aperture radar tomography (POLTOMSAR) at L-band. High-resolution spectral estimators, able to accurately discriminate multiple scattering centers in the vertical direction, are used to separate the response of objects and vehicles embedded in a volumetric background. A new polarimetric spectral analysis technique is introduced and is shown to improve the estimation accuracy of the vertical position of both artificial scatterers and natural environments. This approach provides optimal polarimetric features that may be used to further characterize the objects under analysis. 
The effectiveness of this novel technique for POLTOMSAR is demonstrated using fully polarimetric L-band airborne data sets acquired by the German Aerospace Center (DLR)'s E-SAR system over the test site in Dornstetten, Germany.", "title": "" }, { "docid": "60d0af0788a1b6641c722eafd0d1b8bb", "text": "Enhancing the quality of image is a continuous process in image processing related research activities. For some applications it becomes essential to have best quality of image such as in forensic department, where in order to retrieve maximum possible information, image has to be enlarged in terms of size, with higher resolution and other features associated with it. Such obtained high quality images have also a concern in satellite imaging, medical science, High Definition Television (HDTV), etc. In this paper a novel approach of getting high resolution image from a single low resolution image is discussed. The Non Sub-sampled Contourlet Transform (NSCT) based learning is used to learn the NSCT coefficients at the finer scale of the unknown high-resolution image from a dataset of high resolution images. The cost function consisting of a data fitting term and a Gabor prior term is optimized using an Iterative Back Projection (IBP). By making use of directional decomposition property of the NSCT and the Gabor filter bank with various orientations, the proposed method is capable to reconstruct an image with less edge artifacts. The validity of the proposed approach is proven through simulation on several images. RMS measures, PSNR measures and illustrations show the success of the proposed method.", "title": "" }, { "docid": "264521c7fa8f281f0f72484e8dad4de0", "text": "Autonomous navigation is a fundamental task in mobile robotics. In the last years, several approaches have been addressing the autonomous navigation in outdoor environments. Lately it has also been extended to robotic vehicles in urban environments. This paper presents a vehicle control system capable of learning behaviors based on examples from human driver and analyzing different levels of memory of the templates, which are an important capability to autonomous vehicle drive. Our approach is based on image processing, template matching classification, finite state machine, and template memory. The proposed system allows training an image segmentation algorithm and a neural network to work with levels of memory of the templates in order to identify navigable and non-navigable regions. As an output, it generates the steering control and speed for the Intelligent Robotic Car for Autonomous Navigation (CaRINA). Several experimental tests have been carried out under different environmental conditions to evaluate the proposed techniques.", "title": "" }, { "docid": "6bb105c38e95c382895811d46ba78341", "text": "In this paper, we analyze and optimize non- binary low-density parity-check (NB-LDPC) codes for magnetic recording applications. While the topic of the error floor performance of binary LDPC codes over additive white Gaussian noise (AWGN) channels has recently received considerable attention, very little is known about the error floor performance of NB-LDPC codes over other types of channels, despite the early results demonstrating superior characteristics of NB-LDPC codes relative to their binary counterparts. 
We first show that, due to outer looping between detector and decoder in the receiver, the error profile of NB-LDPC codes over partial-response (PR) channels is qualitatively different from the error profile over AWGN channels - this observation motivates us to introduce new combinatorial definitions aimed at capturing decoding errors that dominate PR channel error floor region. We call these errors (or objects) balanced absorbing sets (BASs), which are viewed as a special subclass of previously introduced absorbing sets (ASs). Additionally, we prove that due to the more restrictive definition of BASs (relative to the more general class of ASs), an additional degree of freedom can be exploited in code design for PR channels. We then demonstrate that the proposed code optimization aimed at removing dominant BASs offers improvements in the frame error rate (FER) in the error floor region by up to 2.5 orders of magnitude over the uninformed designs. Our code optimization technique carefully yet provably removes BASs from the code while preserving its overall structure (node degree, quasi-cyclic property, regularity, etc.). The resulting codes outperform existing binary and NB-LDPC solutions for PR channels by about 2.5 and 1.5 orders of magnitude, respectively.", "title": "" }, { "docid": "6ae2f2fa9a58fd101f6f43276ce2ff04", "text": "In the past decade, we have witnessed an unparalleled success of information and communication technologies (ICT), which is expected to be even more proliferating and ubiquitous in the future. Among many ICT applications, ICT components embedded into various devices and systems have become a critical one. In fact, embedded systems with communication capability span virtually every aspect of our daily life. An embedded system is defined as a computer system designed to perform dedicated specific functions, usually under real-time computing constraints. It is called ‘‘embedded’’ because it is embedded as a part of a complete device or system. By contrast, a general-purpose computer is designed to satisfy a wide range of user requirements. Embedded systems range from portable devices such as smart phones and MP3 players, to large installations like plant control systems. Recently, the convergence of cyber and physical spaces [1] has further transformed traditional embedded systems into cyberphysical systems (CPS), which are characterized by tight integration and coordination between computation and physical processes by means of networking. In CPS, various embedded devices with computational components are networked to monitor, sense, and actuate physical elements in the real world. Examples of CPS encompass a wide range of large-scale engineered systems such as avionics, healthcare, transportation, automation, and smart grid systems. In addition, the recent proliferation of smart phones and mobile Internet devices equipped with multiple sensors can be leveraged to enable mobile cyber-physical applications. In all of these systems, it is of critical importance to properly resolve the complex interactions between various computational and physical elements. In this guest editorial, we first provide an overview of CPS by introducing major issues in CPS as well as recent research efforts and future opportunities for CPS. Then, we summarize the papers in the special section by clearly describing their main contributions on CPS research. The remainder of the editorial is organized as follows: In Section 2, we provide an overview of CPS. 
We first explain the key characteristics of CPS compared to the traditional embedded systems. Then, we introduce the recent trend in CPS research with an emphasis on major research topics in CPS. We introduce recent CPS-related projects in Section 3. Summary of the papers in the special section follows in Section 4 by focusing on their contributions on CPS research. Finally, our conclusion follows in Section 5.", "title": "" }, { "docid": "9a52461cbd746e4e1df5748af37b58ed", "text": "Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or ‘‘tweets’’. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. ‘‘Toyota’’) and user-generated tags (e.g. ‘‘#irony’’). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.", "title": "" }, { "docid": "56f67b47501d7a2c3030c2361082c764", "text": "One method malware authors use to defeat detection of their programs is to use morphing engines to rapidly generate a large number of variants. Inspired by previous works in author attribution of natural language text, we investigate a problem of attributing a malware to a morphing engine. Specifically, we present the malware engine attribution problem and formally define its three variations: MVRP, DENSITY and GEN, that reflect the challenges malware analysts face nowadays. We design and implement heuristics to address these problems and show their effectiveness on a set of well-known malware morphing engines and a real-world malware collection reaching detection accuracies of 96 % and higher. Our experiments confirm the applicability of the proposed approach in practice and indicate that engine attribution may offer a viable enhancement of current defenses against malware.", "title": "" }, { "docid": "924f6292c557358a8a847b923722652f", "text": "With the openness, flexibility and features that Android offers, it has been widely adopted in applications beyond just SmartPhones. This paper presents the design and implementation of a low cost yet compact and secure Android smart phone based home automation system. This design is based on the popular open sourced Arduino prototyping board where the sensors and electrical appliances are connected to the input/output ports of the board. In order to enhance the system responsiveness and to make it more dynamic, we've integrated a popular and open source RTOS, the scmRTOS, which has a very small footprint on the microcontroller. The controlling application which has been developed for Android devices can also be easily developed on other popular SmartPhone operating systems like Apple's iOS, Microsoft's WP7/8 and BlackBerry OS. Pattern based password protection is implemented to allow only authorized users to control the appliances. 
Another add-on included is the integration of Google's voice recognition feature that recognizes users' voice commands to control appliances.", "title": "" }, { "docid": "592431c03450be59f10e56dcabed0ebf", "text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.", "title": "" }, { "docid": "504d29db4565df4d809d63c2c2591706", "text": "We apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models-the square- and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-1 Ising (BSI) model, and the two-dimensional XY model-and we examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow the exploration of different phases and symmetry-breaking, but they can distinguish phase-transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which is particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the \"charge\" correlations (vorticity) in the BSI model (XY model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the \"autoencoder method,\" and we demonstrate that it too can be trained to capture phase transitions and critical points.", "title": "" }, { "docid": "96ab2d8de746234c79e87902de49f343", "text": "Background subtraction is one of the most commonly used components in machine vision systems. Despite the numerous algorithms proposed in the literature and used in practical applications, key challenges remain in designing a single system that can handle diverse environmental conditions. In this paper we present Multiple Background Model based Background Subtraction Algorithm as such a candidate. The algorithm was originally designed for handling sudden illumination changes. The new version has been refined with changes at different steps of the process, specifically in terms of selecting optimal color space, clustering of training images for Background Model Bank and parameter for each channel of color space. This has allowed the algorithm's applicability to wide variety of challenges associated with change detection including camera jitter, dynamic background, Intermittent Object Motion, shadows, bad weather, thermal, night videos etc. 
Comprehensive evaluation demonstrates the superiority of algorithm against state of the art.", "title": "" }, { "docid": "c206399c6ebf96f3de3aa5fdb10db49d", "text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME, however a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E. canis.", "title": "" }, { "docid": "cd3d9bb066729fc7107c0fef89f664fe", "text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies I and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for disposition.il variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model). found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.", "title": "" }, { "docid": "f7a8116cefaaf6ab82118885efac4c44", "text": "Entrepreneurs have created a number of new Internet-based platforms that enable owners to rent out their durable goods when not using them for personal consumption. We develop a model of these kinds of markets in order to analyze the determinants of ownership, rental rates, quantities, and the surplus generated in these markets. Our analysis considers both a short run, before consumers can revise their ownership decisions and a long run, in which they can. This allows us to explore how patterns of ownership and consumption might change as a result of these new markets. We also examine the impact of bringing-to-market costs, such as depreciation, labor costs and transaction costs and consider the platform’s pricing problem. An online survey of consumers broadly supports the modeling assumptions employed. For example, ownership is determined by individuals’ forward-looking assessments of planned usage. Factors enabling sharing markets to flourish are explored. JEL L1, D23, D47", "title": "" }, { "docid": "30decb72388cd024661c552670a28b11", "text": "The increasing volume and unstructured nature of data available on the World Wide Web (WWW) makes information retrieval a tedious and mechanical task. Lots of this information is not semantic driven, and hence not machine process able, but its only in human readable form. The WWW is designed to builds up a source of reference for web of meaning. Ontology information on different subjects spread globally is made available at one place. 
The Semantic Web (SW), as an extension of the WWW, is designed to build a foundation of vocabularies and effective communication of semantics. A promising area of the Semantic Web is logical and lexical semantics. Ontology plays a major role in representing information more meaningfully for humans and machines and in supporting its later effective retrieval. This paper presents a unique approach to representation and reasoning with ontology for the semantic analysis of various types of documents, and also surveys multiple approaches to ontology learning that enable reasoning with uncertain, incomplete and contradictory information in a domain context.", "title": "" } ]
scidocsrr
7186875ab92c8305f0303931bff05cc8
Hallucinating Compressed Face Images
[ { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" } ]
[ { "docid": "d5ac5e10fc2cc61e625feb28fc9095b5", "text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4fa688e986d177771c5992262cf342b5", "text": "The TIPSTER Text Summarization Evaluation (SUMMAC) has developed several new extrinsic and intrinsic methods for evaluating summaries. It has established definitively that automatic text summarization is very effective in relevance assessment tasks on news articles. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy. Analysis of feedback forms filled in after each decision indicated that the intelligibility of present-day machine-generated summaries is high. Systems that performed most accurately in the production of indicative and informative topic-related summaries used term frequency and co-occurrence statistics, and vocabulary overlap comparisons between text passages. However, in the absence of a topic, these statistical methods do not appear to provide any additional leverage: in the case of generic summaries, the systems were indistinguishable in accuracy. The paper discusses some of the tradeoffs and challenges faced by the evaluation, and also lists some of the lessons learned, impacts, and possible future directions. The evaluation methods used in the SUMMAC evaluation are of interest to both summarization evaluation as well as evaluation of other 'output-related' NLP technologies, where there may be many potentially acceptable outputs, with no automatic way to compare them.", "title": "" }, { "docid": "b2a8b979f4bd96a28746b090bca2a567", "text": "Gradient-based policy search is an alternative to value-function-based methods for reinforcement learning in non-Markovian domains. One apparent drawback of policy search is its requirement that all actions be \\on-policy\"; that is, that there be no explicit exploration. 
In this paper, we provide a method for using importance sampling to allow any well-behaved directed exploration policy during learning. We show both theoretically and experimentally that using this method can achieve dramatic performance improvements. During this work, Nicolas Meuleau was at the MIT Arti cial Intelligence laboratory, supported in part by a research grant from NTT; Leonid Peshkin by grants from NSF and NTT; and Kee-Eung Kim in part by AFOSR/RLF 30602-95-1-0020.", "title": "" }, { "docid": "08f7c7d3bc473e929b4a224636f2a887", "text": "Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient since empty voxels or background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of perview loss and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.", "title": "" }, { "docid": "fe16f2d946b3ea7bc1169d5667365dbe", "text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.", "title": "" }, { "docid": "1986179d7d985114fa14bbbe01770d8a", "text": "A low-power consumption, small-size smart antenna, named electronically steerable parasitic array radiator (ESPAR), has been designed. Beamforming is achieved by tuning the load reactances at parasitic elements surrounding the active central element. A fast beamforming algorithm based on simultaneous perturbation stochastic approximation with a maximum cross correlation coefficient criterion is proposed. The simulation and experimental results validate the algorithm. 
In an environment where the signal-to-interference-ratio is 0 dB, the algorithm converges within 50 iterations and achieves an output signal-to-interference-plus-noise-ratio of 10 dB. With the fast beamforming ability and its low-power consumption attribute, the ESPAR antenna makes the mass deployment of smart antenna technologies practical.", "title": "" }, { "docid": "b6c1aa9e3b55b6ad7bd01f8b1c017e7b", "text": "In the last decade, with availability of large datasets and more computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game planning and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. The lack of thereof is a major drawback in many applications, e.g. healthcare and finance, where rationale for model's decision is a requirement for trust. In the light of these issues, explainable artificial intelligence (XAI) has become an area of interest in research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.", "title": "" }, { "docid": "58b957db2e72d76e5ee1fc5102df7dc1", "text": "This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.", "title": "" }, { "docid": "45009303764570cbfa3532a9d98f5393", "text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. 
Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.", "title": "" }, { "docid": "3f9c720773146d83c61cfbbada3938c4", "text": "How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.", "title": "" }, { "docid": "85e593e5a663346978272bf13a1d135a", "text": "Methods. Two text analysis tools were used to examine the crime narratives of 14 psychopathic and 38 non-psychopathic homicide offenders. Psychopathy was determined using the Psychopathy Checklist-Revised (PCL-R). The Wmatrix linguistic analysis tool (Rayson, 2008) was used to examine parts of speech and semantic content while the Dictionary of Affect and Language (DAL) tool (Whissell & Dewson, 1986) was used to examine the emotional characteristics of the narratives.", "title": "" }, { "docid": "aedb6c6bce85ca8c58b3a4ef0850f3ff", "text": "Data assurance and resilience are crucial security issues in cloud-based IoT applications. With the widespread adoption of drones in IoT scenarios such as warfare, agriculture and delivery, effective solutions to protect data integrity and communications between drones and the control system have been in urgent demand to prevent potential vulnerabilities that may cause heavy losses. To secure drone communication during data collection and transmission, as well as preserve the integrity of collected data, we propose a distributed solution by utilizing blockchain technology along with the traditional cloud server. Instead of registering the drone itself to the blockchain, we anchor the hashed data records collected from drones to the blockchain network and generate a blockchain receipt for each data record stored in the cloud, reducing the burden of moving drones with the limit of battery and process capability while gaining enhanced security guarantee of the data. This paper presents the idea of securing drone data collection and communication in combination with a public blockchain for provisioning data integrity and cloud auditing. The evaluation shows that our system is a reliable and distributed system for drone data assurance and resilience with acceptable overhead and scalability for a large number of drones.", "title": "" }, { "docid": "1967de1be0b095b4a59a5bb0fdc403c0", "text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. 
By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.", "title": "" }, { "docid": "e05270c1d2abeda1cee99f1097c1c5d5", "text": "E-transactions have become promising and very much convenient due to worldwide and usage of the internet. The consumer reviews are increasing rapidly in number on various products. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a big task for a potential consumer to read all reviews to make a good decision of purchasing. It is beneficial to mine available consumer reviews for popular products from various product review sites of consumer. The first step is performing sentiment analysis to decide the polarity of a review. On the basis of polarity, we can then classify the review. Comparison is made among the different WEKA classifiers in the form of charts and graphs.", "title": "" }, { "docid": "f19ac14bf8c88766c55ddcef75e872d2", "text": "Real-time microcontrollers have been widely adopted in cyber-physical systems that require both real-time and security guarantees. Unfortunately, security is sometimes traded for real-time performance in such systems. Notably, memory isolation, which is one of the most established security features in modern computer systems, is typically not available in many real-time microcontroller systems due to its negative impacts on performance and violation of real-time constraints. As such, the memory space of these systems has created an open, monolithic attack surface that attackers can target to subvert the entire systems. In this paper, we present MINION, a security architecture that intends to virtually partition the memory space and enforce memory access control of a real-time microcontroller. MINION can automatically identify the reachable memory regions of realtime processes through off-line static analysis on the system’s firmware and conduct run-time memory access control through hardware-based enforcement. Our evaluation results demonstrate that, by significantly reducing the memory space that each process can access, MINION can effectively protect a microcontroller from various attacks that were previously viable. In addition, unlike conventional memory isolation mechanisms that might incur substantial performance overhead, the lightweight design of MINION is able to maintain the real-time properties of the microcontroller.", "title": "" }, { "docid": "d13bf709580b207841db407338393df6", "text": "One version of a stochastic computer simulation of airspace includes the implementation of complex, high-fidelity models of aircraft. Since the models are pre-existing, third-party developed products, these aircraft models require validation prior to implementation. Several methodologies are available to demonstrate the accuracy of these models and a variety of testers potentially involved, so a notation is proposed to describe the level of testing performed in the validation of a given model using seven fields, each making use of succinct notation. 
Rather than limiting those qualified to do this type of work or restrict the aircraft models available for use, this classification is proposed in order to allow for anyone to complete validation tasks and to allow for a wide variety of tasks during the course of validation, while keeping the ultimate user of the model easily and fully informed as to the level of testing done and the experience and qualifications of the tester.", "title": "" }, { "docid": "4a65fcbc395eab512d8a7afe33c0f5ae", "text": "In eukaryotes, the spindle-assembly checkpoint (SAC) is a ubiquitous safety device that ensures the fidelity of chromosome segregation in mitosis. The SAC prevents chromosome mis-segregation and aneuploidy, and its dysfunction is implicated in tumorigenesis. Recent molecular analyses have begun to shed light on the complex interaction of the checkpoint proteins with kinetochores — structures that mediate the binding of spindle microtubules to chromosomes in mitosis. These studies are finally starting to reveal the mechanisms of checkpoint activation and silencing during mitotic progression.", "title": "" }, { "docid": "70fe710570590eccf124b4aefd5886a8", "text": "Reinforcement learning problems are often phrased in terms of Markov decision processes (MDPs). In this thesis we go beyond MDPs and consider reinforcement learning in environments that are non-Markovian, non-ergodic and only partially observable. Our focus is not on practical algorithms, but rather on the fundamental underlying problems: How do we balance exploration and exploitation? How do we explore optimally? When is an agent optimal? We follow the nonparametric realizable paradigm: we assume the data is drawn from an unknown source that belongs to a known countable class of candidates. First, we consider the passive (sequence prediction) setting, learning from data that is not independent and identically distributed. We collect results from artificial intelligence, algorithmic information theory, and game theory and put them in a reinforcement learning context: they demonstrate how an agent can learn the value of its own policy. Next, we establish negative results on Bayesian reinforcement learning agents, in particular AIXI. We show that unlucky or adversarial choices of the prior cause the agent to misbehave drastically. Therefore Legg-Hutter intelligence and balanced Pareto optimality, which depend crucially on the choice of the prior, are entirely subjective. Moreover, in the class of all computable environments every policy is Pareto optimal. This undermines all existing optimality properties for AIXI. However, there are Bayesian approaches to general reinforcement learning that satisfy objective optimality guarantees: We prove that Thompson sampling is asymptotically optimal in stochastic environments in the sense that its value converges to the value of the optimal policy. We connect asymptotic optimality to regret given a recoverability assumption on the environment that allows the agent to recover from mistakes. Hence Thompson sampling achieves sublinear regret in these environments. AIXI is known to be incomputable. We quantify this using the arithmetical hierarchy, and establish upper and corresponding lower bounds for incomputability. Further, we show that AIXI is not limit computable, thus cannot be approximated using finite computation. However there are limit computable ε-optimal approximations to AIXI. 
We also derive computability bounds for knowledge-seeking agents, and give a limit computable weakly asymptotically optimal reinforcement learning agent. Finally, our results culminate in a formal solution to the grain of truth problem: A Bayesian agent acting in a multi-agent environment learns to predict the other agents’ policies if its prior assigns positive probability to them (the prior contains a grain of truth). We construct a large but limit computable class containing a grain of truth and show that agents based on Thompson sampling over this class converge to play ε-Nash equilibria in arbitrary unknown computable multi-agent environments.", "title": "" }, { "docid": "7568cb435d0211248e431d865b6a477e", "text": "We propose prosody embeddings for emotional and expressive speech synthesis networks. The proposed methods introduce temporal structures in the embedding networks, thus enabling fine-grained control of the speaking style of the synthesized speech. The temporal structures can be designed either on the speech side or the text side, leading to different control resolutions in time. The prosody embedding networks are plugged into end-to-end speech synthesis networks and trained without any other supervision except for the target speech for synthesizing. It is demonstrated that the prosody embedding networks learned to extract prosodic features. By adjusting the learned prosody features, we could change the pitch and amplitude of the synthesized speech both at the frame level and the phoneme level. We also introduce the temporal normalization of prosody embeddings, which shows better robustness against speaker perturbations during prosody transfer tasks.", "title": "" } ]
scidocsrr
4cca3386d7c4989ebb216740d91fd942
A low-cost harmonic radar for tracking very small tagged amphibians
[ { "docid": "3973a575bae986eb0410df18b0de8a5a", "text": "The design and operation along with verifying measurements of a harmonic radar transceiver, or tag, developed for insect tracking are presented. A short length of wire formed the antenna while a beam lead Schottky diode across a resonant loop formed the frequency doubler circuit yielding a total tag mass of less than 3 mg. Simulators using the method-of-moments for the antenna, finite-integral time-domain for the loop, and harmonic balance for the nonlinear diode element were used to predict and optimize the transceiver performance. This performance is compared to the ideal case and to measurements performed using a pulsed magnetron source within an anechoic chamber. A method for analysis of the tag is presented and used to optimize the design by creating the largest possible return signal at the second harmonic frequency for a particular incident power density. These methods were verified through measurement of tags both in isolation and mounted on insects. For excitation at 9.41 GHz the optimum tag in isolation had an antenna length of 12 mm with a loop diameter of 1 mm which yielded a harmonic cross-section of 40 mm/sup 2/. For tags mounted on Colorado potato beetles, optimum performance was achieved with an 8 mm dipole fed 2 mm from the beetle attached end. A theory is developed that describes harmonic radar in a fashion similar to the conventional radar range equation but with harmonic cross-section replacing the conventional radar cross-section. This method provides a straightforward description of harmonic radar system performance as well as provides a means to describe harmonic radar tag performance.", "title": "" } ]
[ { "docid": "29f78d229a035e81a082aa411a1e22c9", "text": "This paper presents a new robust method for inertial MEM (MicroElectroMechanical systems) 3D gesture recognition. The linear acceleration and the angular velocity, respectively provided by the accelerometer and the gyrometer, are sampled in time resulting in 6D values at each time step which are used as inputs for the gesture recognition system. We propose to build a system based on Bidirectional Long ShortTerm Memory Recurrent Neural Networks (BLSTM-RNN) for gesture classification from raw MEM data. We also compare this system to a geometric approach using DTW (Dynamic Time Warping) and a statistical method based on HMM (Hidden Markov Model) from filtered and denoised MEM data. Experimental results on 22 individuals producing 14 gestures in the air show that the proposed approach outperforms classical classification methods with a classification mean rate of 95.57% and a standard deviation of 0.50 for 616 test gestures. Furthermore, these experiments underline that combining accelerometer and gyrometer information gives better results that using a single inertial description.", "title": "" }, { "docid": "1f3d84321cc2843349c5b6ef43fc8b9a", "text": "It has long been posited that among emotional stimuli, only negative threatening information modulates early shifts of attention. However, in the last few decades there has been an increase in research showing that attention is also involuntarily oriented toward positive rewarding stimuli such as babies, food, and erotic information. Because reproduction-related stimuli have some of the largest effects among positive stimuli on emotional attention, the present work reviews recent literature and proposes that the cognitive and cerebral mechanisms underlying the involuntarily attentional orientation toward threat-related information are also sensitive to erotic information. More specifically, the recent research suggests that both types of information involuntarily orient attention due to their concern relevance and that the amygdala plays an important role in detecting concern-relevant stimuli, thereby enhancing perceptual processing and influencing emotional attentional processes.", "title": "" }, { "docid": "3c5a5ee0b855625c959593a08d6e1e24", "text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.", "title": "" }, { "docid": "6533c68c486f01df6fbe80993a9902a1", "text": "Frequent pattern mining has been a focused theme in data mining research for over a decade. 
Abundant literature has been dedicated to this research and tremendous progress has been made, ranging from efficient and scalable algorithms for frequent itemset mining in transaction databases to numerous research frontiers, such as sequential pattern mining, structured pattern mining, correlation mining, associative classification, and frequent pattern-based clustering, as well as their broad applications. In this article, we provide a brief overview of the current status of frequent pattern mining and discuss a few promising research directions. We believe that frequent pattern mining research has substantially broadened the scope of data analysis and will have deep impact on data mining methodologies and applications in the long run. However, there are still some challenging research issues that need to be solved before frequent pattern mining can claim a cornerstone approach in data mining applications.", "title": "" }, { "docid": "192e1bd5baa067b563edb739c05decfa", "text": "This paper presents a simple and accurate design methodology for LLC resonant converters, based on a semi- empirical approach to model steady-state operation in the \"be- low-resonance\" region. This model is framed in a design strategy that aims to design a converter capable of operating with soft-switching in the specified input voltage range with a load ranging from zero up to the maximum specified level.", "title": "" }, { "docid": "961348dd7afbc1802d179256606bdbb8", "text": "Class imbalance is among the most persistent complications which may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches to deal with the problem of imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions incorporating both the data and algorithm level approaches assume higher misclassification costs with samples in the minority class and seek to minimize high cost errors. Nevertheless, there is not a full exhaustive comparison between those models which can help us to determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data level proposals against algorithm level proposals focusing in cost-sensitive models and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight an unique approach among the rest. This will lead to a discussion about the data intrinsic characteristics of the imbalanced classification problem which will help to follow new paths that can lead to the improvement of current models mainly focusing on class overlap and dataset shift in imbalanced classification. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d5e27463c14210420833554438f05ed3", "text": "During development, the healthy human brain constructs a host of large-scale, distributed, function-critical neural networks. Neurodegenerative diseases have been thought to target these systems, but this hypothesis has not been systematically tested in living humans. 
We used network-sensitive neuroimaging methods to show that five different neurodegenerative syndromes cause circumscribed atrophy within five distinct, healthy, human intrinsic functional connectivity networks. We further discovered a direct link between intrinsic connectivity and gray matter structure. Across healthy individuals, nodes within each functional network exhibited tightly correlated gray matter volumes. The findings suggest that human neural networks can be defined by synchronous baseline activity, a unified corticotrophic fate, and selective vulnerability to neurodegenerative illness. Future studies may clarify how these complex systems are assembled during development and undermined by disease.", "title": "" }, { "docid": "cd2ad7c7243c2b690239f1466b57c0ea", "text": "In 2001, JPL commissioned four industry teams to make a fresh examination of Mars Sample Return (MSR) mission architectures. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that the converged-upon MSR concept did not fit reasonably within a balanced program. Therefore, along with a new MSR Science Steering Group, JPL asked the industry teams plus JPL's Team-X to explore ways to reduce the cost. A paper presented at last year's conference described the emergence of a new, affordable \"Groundbreaking-MSR\" concept (Mattingly et al., 2003). This work addresses the continued evolution of the Groundbreaking MSR concept over the last year. One of the tenets of the low-cost approach is to use substantial heritage from an earlier mission, Mars Science Laboratory (MSL). Recently, the MSL project developed and switched its baseline to a revolutionary landing approach, coined \"skycrane\" where the MSL, which is a rover, would be lowered gently to the Martian surface from a hovering vehicle. MSR has adopted this approach in its mission studies, again continuing to capitalize on the heritage for a significant portion of the new lander. In parallel, a MSR Technology Board was formed to reexamine MSR technology needs and participate in a continuing refinement of architectural trades. While the focused technology program continues to be definitized through the remainder of this year, the current assessment of what technology development is required, is discussed in this paper. In addition, the results of new trade studies and considerations will be discussed. Adopting these changes, the Groundbreaking MSR concept has shifted to that presented in this paper. It remains a project that is affordable and meets the basic science needs defined by the MSR Science Steering Group in 2002.", "title": "" }, { "docid": "65b5d05ea38c4350b98b1e355200d533", "text": "Deep learning usually requires large amounts of labeled training data, but annotating data is costly and tedious. The framework of semi-supervised learning provides the means to use both labeled data and arbitrary amounts of unlabeled data for training. Recently, semisupervised deep learning has been intensively studied for standard CNN architectures. However, Fully Convolutional Networks (FCNs) set the state-of-the-art for many image segmentation tasks. To the best of our knowledge, there is no existing semi-supervised learning method for such FCNs yet. We lift the concept of auxiliary manifold embedding for semisupervised learning to FCNs with the help of Random Feature Embedding. 
In our experiments on the challenging task of MS Lesion Segmentation, we leverage the proposed framework for the purpose of domain adaptation and report substantial improvements over the baseline model.", "title": "" }, { "docid": "5487ee527ef2a9f3afe7f689156e7e4d", "text": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.", "title": "" }, { "docid": "e72872277a33dcf6d5c1f7e31f68a632", "text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.", "title": "" }, { "docid": "5eb304f9287785a65dd159e42a51eb8c", "text": "The forensic examination following rape has two primary purposes: to provide health care and to collect evidence. Physical injuries need treatment so that they heal without adverse consequences. The pattern of injuries also has a forensic significance in that injuries are linked to the outcome of legal proceedings. This literature review investigates the variables related to genital injury prevalence and location that are reported in a series of retrospective reviews of medical records. The author builds the case that the prevalence and location of genital injury provide only a partial description of the nature of genital trauma associated with sexual assault and suggests a multidimensional definition of genital injury pattern. Several of the cited studies indicate that new avenues of investigation, such as refined measurement strategies for injury severity and skin color, may lead to advancements in health care, forensic, and criminal justice science.", "title": "" }, { "docid": "60ade549a5d58da43824ba0ddf7ab242", "text": "Existing designs for fine-grained, dynamic information-flow control assume that it is acceptable to terminate the entire system when an incorrect flow is detected-i.e, they give up availability for the sake of confidentiality and integrity. This is an unrealistic limitation for systems such as long-running servers. We identify public labels and delayed exceptions as crucial ingredients for making information-flow errors recoverable in a sound and usable language, and we propose two new error-handling mechanisms that make all errors recoverable. 
The first mechanism builds directly on these basic ingredients, using not-a-values (NaVs) and data flow to propagate errors. The second mechanism adapts the standard exception model to satisfy the extra constraints arising from information flow control, converting thrown exceptions to delayed ones at certain points. We prove that both mechanisms enjoy the fundamental soundness property of non-interference. Finally, we describe a prototype implementation of a full-scale language with NaVs and report on our experience building robust software components in this setting.", "title": "" }, { "docid": "5bca58cbd1ef80ebf040529578d2a72a", "text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.", "title": "" }, { "docid": "78275a488b19bf10882ab6ef0c552f60", "text": "The islanding methods are classified in terms of the islanding principle and the distributed Generation (DG). Islanding detection techniques are mainly divided into three types for distribution systems: Remote, local and communication based techniques. Passive methods are based on the information available on the DG site at the point of common coupling (PCC) with the utility grid. A new approach in passive techniques is the use of data-mining to classify the system parameters. Passive techniques are fast and they don’t introduce disturbance in the system but they have a large non detectable zone (NDZ) where they fail to detect the islanding condition. In this paper, considerable indices of a distribution system are collected by using MATLAB simulation. These indices are change in power, change in voltage, rate of change of power, rate of change of voltage, total harmonic distortion (THD) current, voltage, and change in power factor. Adaptive Neuro-Fuzzy inference System (ANFIS) in MATLAB used to classifying for these indices and define the boundaries. The results show the use of ANFIS in reducing the NDZ of passive islanding detection systems. KeywordsNon detectable zone (NDZ), Adaptive NeuroFuzzy inference System (ANFIS), Distributed generation (DG), islanding detection.", "title": "" }, { "docid": "a6b35cc94df399b2c428609958706a94", "text": "Autonomous underwater vehicles (AUVs) are an indispensable tool for marine scientists to study the world's oceans. The Slocum glider is a buoyancy driven AUV designed for missions that can last weeks or even months. Although successful, its hardware and layered control architecture is rather limited and difficult to program. Due to limits in its hardware and software infrastructure, the Slocum glider is not able to change its behavior based on sensor readings while underwater. In this paper, we discuss a new programming architecture for AUVs like the Slocum. We present a new model that allows marine scientists to express AUV missions at a higher level of abstraction, leaving low-level software and hardware details to the compiler and runtime system. The Slocum glider is used as an illustration of how our programming architecture can be implemented within an existing system. 
The Slocum's new framework consists of an event driven, finite state machine model, a corresponding compiler and runtime system, and a hardware platform that interacts with the glider's existing hardware infrastructure. The new programming architecture is able to implement changes in glider behavior in response to sensor readings while submerged. This crucial capability will enable advanced glider behaviors such as underwater communication and swarming. Experimental results based on simulation and actual glider deployments off the coast of New Jersey show the expressiveness and effectiveness of our prototype implementation.", "title": "" }, { "docid": "6030be1ec26c68bce9edc262681ed11e", "text": "Modeling neural tissue is an important tool to investigate biological neural networks. Until recently, most of this modeling has been done using numerical methods. In the European research project \"FACETS\" this computational approach is complemented by different kinds of neuromorphic systems. A special emphasis lies in the usability of these systems for neuroscience. To accomplish this goal an integrated software/hardware framework has been developed which is centered around a unified neural system description language, called PyNN, that allows the scientist to describe a model and execute it in a transparent fashion on either a neuromorphic hardware system or a numerical simulator. A very large analog neuromorphic hardware system developed within FACETS is able to use complex neural models as well as realistic network topologies, i.e. it can realize more than 10000 synapses per neuron, to allow the direct execution of models which previously could have been simulated numerically only.", "title": "" }, { "docid": "bfde4b16d07f49ede231702547e0b748", "text": "Android Notifications can be considered as essential parts in Human-Smartphone interaction and inextricable modules of modern mobile applications that can facilitate User Interaction and improve User Experience. This paper presents how this well-crafted and thoroughly documented mechanism, provided by the OS can be exploited by an adversary. More precisely, we present attacks that result either in forging smartphone application notifications to lure the user in disclosing sensitive information, or manipulate Android Notifications to launch a Denial of Service attack to the users’ device, locally and remotely, rendering them unusable. This paper concludes by proposing generic countermeasures for the discussed security threats.", "title": "" }, { "docid": "aeb3e0b089e658b532b3ed6c626898dd", "text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.", "title": "" } ]
scidocsrr
952730d7e4071e6f3fba2fc1a322a745
RUPERT: An exoskeleton robot for assisting rehabilitation of arm functions
[ { "docid": "cdc3e4b096be6775547a8902af52e798", "text": "OBJECTIVE\nThe aim of the study was to present a systematic review of studies that investigate the effects of robot-assisted therapy on motor and functional recovery in patients with stroke.\n\n\nMETHODS\nA database of articles published up to October 2006 was compiled using the following Medline key words: cerebral vascular accident, cerebral vascular disorders, stroke, paresis, hemiplegia, upper extremity, arm, and robot. References listed in relevant publications were also screened. Studies that satisfied the following selection criteria were included: (1) patients were diagnosed with cerebral vascular accident; (2) effects of robot-assisted therapy for the upper limb were investigated; (3) the outcome was measured in terms of motor and/or functional recovery of the upper paretic limb; and (4) the study was a randomized clinical trial (RCT). For each outcome measure, the estimated effect size (ES) and the summary effect size (SES) expressed in standard deviation units (SDU) were calculated for motor recovery and functional ability (activities of daily living [ADLs]) using fixed and random effect models. Ten studies, involving 218 patients, were included in the synthesis. Their methodological quality ranged from 4 to 8 on a (maximum) 10-point scale.\n\n\nRESULTS\nMeta-analysis showed a nonsignificant heterogeneous SES in terms of upper limb motor recovery. Sensitivity analysis of studies involving only shoulder-elbow robotics subsequently demonstrated a significant homogeneous SES for motor recovery of the upper paretic limb. No significant SES was observed for functional ability (ADL).\n\n\nCONCLUSION\nAs a result of marked heterogeneity in studies between distal and proximal arm robotics, no overall significant effect in favor of robot-assisted therapy was found in the present meta-analysis. However, subsequent sensitivity analysis showed a significant improvement in upper limb motor function after stroke for upper arm robotics. No significant improvement was found in ADL function. However, the administered ADL scales in the reviewed studies fail to adequately reflect recovery of the paretic upper limb, whereas valid instruments that measure outcome of dexterity of the paretic arm and hand are mostly absent in selected studies. Future research into the effects of robot-assisted therapy should therefore distinguish between upper and lower robotics arm training and concentrate on kinematical analysis to differentiate between genuine upper limb motor recovery and functional recovery due to compensation strategies by proximal control of the trunk and upper limb.", "title": "" } ]
[ { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "1af5c5e20c1ce827f899dc70d0495bdc", "text": "High power sources and high sensitivity detectors are highly in demand for terahertz imaging and sensing systems. Use of nano-antennas and nano-plasmonic light concentrators in photoconductive terahertz sources and detectors has proven to offer significantly higher terahertz radiation powers and detection sensitivities by enhancing photoconductor quantum efficiency while maintaining its ultrafast operation. This is because of the unique capability of nano-antennas and nano-plasmonic structures in manipulating the concentration of photo-generated carriers within the device active area, allowing a larger number of photocarriers to efficiently contribute to terahertz radiation and detection. An overview of some of the recent advancements in terahertz optoelectronic devices through use of various types of nano-antennas and nano-plasmonic light concentrators is presented in this article.", "title": "" }, { "docid": "6f95d8bcaefcc99209279dadb1beb0a6", "text": "Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisorlevel services from multiple third-parties in a compartmentalized manner. We propose the notion of a multihypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest’s memory, virtual CPU, and I/O resources. 
Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.", "title": "" }, { "docid": "3c733b60b2319c706069d9163cf849d4", "text": "A novel dual-mode microstrip square loop resonator is proposed using the slow-wave and dispersion features of the microstrip slow-wave open-loop resonator. It is shown that the designed and fabricated dual-mode microstrip filter has a wide stopband including the first spurious resonance frequency. Also, it has a size reduction of about 50% at the same center frequency, as compared with the dual-mode bandpass filters such as microstrip patch, cross-slotted patch, square loop, and ring resonator filter.", "title": "" }, { "docid": "4ee84cfdef31d4814837ad2811e59cd4", "text": "In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.", "title": "" }, { "docid": "7e3de04fc54b78d66e8209984a76b25c", "text": "OBJECTIVE\nTo assess existing reported human trials of Withania somnifera (WS; common name, ashwagandha) for the treatment of anxiety.\n\n\nDESIGN\nSystematic review of the literature, with searches conducted in PubMed, SCOPUS, CINAHL, and Google Scholar by a medical librarian. Additionally, the reference lists of studies identified in these databases were searched by a research assistant, and queries were conducted in the AYUSH Research Portal. Search terms included \"ashwagandha,\" \"Withania somnifera,\" and terms related to anxiety and stress. Inclusion criteria were human randomized controlled trials with a treatment arm that included WS as a remedy for anxiety or stress. The study team members applied inclusion criteria while screening the records by abstract review.\n\n\nINTERVENTION\nTreatment with any regimen of WS.\n\n\nOUTCOME MEASURES\nNumber and results of studies identified in the review.\n\n\nRESULTS\nSixty-two abstracts were screened; five human trials met inclusion criteria. 
Three studies compared several dosage levels of WS extract with placebos using versions of the Hamilton Anxiety Scale, with two demonstrating significant benefit of WS versus placebo, and the third demonstrating beneficial effects that approached but did not achieve significance (p=0.05). A fourth study compared naturopathic care with WS versus psychotherapy by using Beck Anxiety Inventory (BAI) scores as an outcome; BAI scores decreased by 56.5% in the WS group and decreased 30.5% for psychotherapy (p<0.0001). A fifth study measured changes in Perceived Stress Scale (PSS) scores in WS group versus placebo; there was a 44.0% reduction in PSS scores in the WS group and a 5.5% reduction in the placebo group (p<0.0001). All studies exhibited unclear or high risk of bias, and heterogenous design and reporting prevented the possibility of meta-analysis.\n\n\nCONCLUSIONS\nAll five studies concluded that WS intervention resulted in greater score improvements (significantly in most cases) than placebo in outcomes on anxiety or stress scales. Current evidence should be received with caution because of an assortment of study methods and cases of potential bias.", "title": "" }, { "docid": "c52a9d3d66d2b56374f26580a728cbd2", "text": "Automatic License Plate Recognition (ALPR) has important applications in traffic surveillance. It is a challenging problem especially in countries like in India where the license plates have varying sizes, number of lines, fonts etc. The difficulty is all the more accentuated in traffic videos as the cameras are placed high and most plates appear skewed. This work aims to address ALPR using Deep CNN methods for real-time traffic videos. We first extract license plate candidates from each frame using edge information and geometrical properties, ensuring high recall. These proposals are fed to a CNN classifier for License Plate detection obtaining high precision. We then use a CNN classifier trained for individual characters along with a spatial transformer network (STN) for character recognition. Our system is evaluated on several traffic videos with vehicles having different license plate formats in terms of tilt, distances, colors, illumination, character size, thickness etc. Results demonstrate robustness to such variations and impressive performance in both the localization and recognition. We also make available the dataset for further research on this topic.", "title": "" }, { "docid": "c4f6edd01cee1e44a00eca11a086a284", "text": "In this paper we investigate the effectiveness of Recurrent Neural Networks (RNNs) in a top-N content-based recommendation scenario. Specifically, we propose a deep architecture which adopts Long Short Term Memory (LSTM) networks to jointly learn two embeddings representing the items to be recommended as well as the preferences of the user. Next, given such a representation, a logistic regression layer calculates the relevance score of each item for a specific user and we returns the top-N items as recommendations.\n In the experimental session we evaluated the effectiveness of our approach against several baselines: first, we compared it to other shallow models based on neural networks (as Word2Vec and Doc2Vec), next we evaluated it against state-of-the-art algorithms for collaborative filtering. 
In both cases, our methodology obtains a significant improvement over all the baselines, thus giving evidence of the effectiveness of deep learning techniques in content-based recommendation scenarios and paving the way for several future research directions.", "title": "" }, { "docid": "f065684c26f71567c092ee6c85d5e831", "text": "Various types of killings occur within family matrices. The news media highlight the dramatic components, and even novels now use it as a theme. 1 However, a psychiatric understanding remains elusive. Not all killings within a family are familicidal. For want of a better term, I have called the killing of more than one member of a family by another family member \"familicide.\" The destruction of the family unit appears to be the goal. Such behavior comes within the category of \"mass murders\" where a number of victims are killed in a short period of time by one person. However, in mass murders the victims are not exclusively family members. The case of one person committing a series of homicides over an extended period of time, such as months or years, also differs from familicide. The latter can result in the perpetrator getting killed or injured in the process, or subsequently attempting a suicidal act. However, neither injury, nor suicide, nor death of the perpetrator is an indispensable part of familicide. Fifteen different theories purport to explain physical violence within the nuclear family. 2 Varieties of killings within a family are subvarieties and familicide is yet a rarer event. Pedicide is the killing of a child by a parent. These are usually cases of one child being killed by one parent. If the child happens to be an infant, the act is infanticide. Many of the latter are situations where a mother kills her infant and is diagnosed schizophrenic or psychotic depressive. Child beating by a parent can result in inadvertent death. One sibling killing another is fratricide. A child killing a parent is parricide, or more specifically patricide or matricide. Uxoricide is one spouse killing another. Each of these behaviors has its own intrapsychic and interpersonal correlates. Such correlates often involve victimologic aspects. As a caveat, and based on this study, we should not assume that the perpetrators in familicide all bear one diagnosis even in a descriptive nosological sense. A distinction is needed between intra familial homicides related to psychiatric disturbance in one family member and collective types of violence in which families are destroyed. Extermination of families based on national, ethnic, racial or religious backgrounds are not", "title": "" }, { "docid": "8e44d0e60c6460a07d66ba9a90741b86", "text": "Although graph embedding has been a powerful tool for modeling data intrinsic structures, simply employing all features for data structure discovery may result in noise amplification. This is particularly severe for high dimensional data with small samples. To meet this challenge, this paper proposes a novel efficient framework to perform feature selection for graph embedding, in which a category of graph embedding methods is cast as a least squares regression problem. In this framework, a binary feature selector is introduced to naturally handle the feature cardinality in the least squares formulation. The resultant integral programming problem is then relaxed into a convex Quadratically Constrained Quadratic Program (QCQP) learning problem, which can be efficiently solved via a sequence of accelerated proximal gradient (APG) methods. 
Since each APG optimization is w.r.t. only a subset of features, the proposed method is fast and memory efficient. The proposed framework is applied to several graph embedding learning problems, including supervised, unsupervised, and semi-supervised graph embedding. Experimental results on several high dimensional data demonstrated that the proposed method outperformed the considered state-of-the-art methods.", "title": "" }, { "docid": "85576e6b36757f0a475e7482e4827a91", "text": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation — the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus is able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and ChineseEnglish translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT’14 English-German translation, the SAT achieves 5.58× speedup while maintains 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).", "title": "" }, { "docid": "3b0a36f6d484705f8a68ae4a928b743e", "text": "Solution The unique pure strategy subgame perfect equilibrium is (Rr, r). 2. (30pts.) An entrepreneur has a project that she presents to a capitalist. She has her own money that she could invest in the project and is looking for additional funding from the capitalist. The project is either good (denoted g) (with probability p) or it is bad (denoted b) (with probability 1− p) and only the entrepreneur knows the quality of the project. The entrepreneur (E) decides whether to invest her own money (I) or not (N), the capitalist (C) observes whether the entrepreneur has invested or not and then decides whether to invest his money (i) or not (n). Figure 1 represents the game and gives the payoffs, where the first number is the entrepreneur’s payoff and the second number is the capitalist’s. (a) (20pts.) Find the set of pure strategy perfect Bayesian equilibria of this game.", "title": "" }, { "docid": "e787a1486a6563c15a74a07ed9516447", "text": "This chapter describes how engineering principles can be used to estimate joint forces. Principles of static and dynamic analysis are reviewed, with examples of static analysis applied to the hip and elbow joints and to the analysis of joint forces in human ancestors. Applications to indeterminant problems of joint mechanics are presented and utilized to analyze equine fetlock joints.", "title": "" }, { "docid": "6bca70ccf17fd4380502b7b4e2e7e550", "text": "A consistent UI leaves an overall impression on user’s psychology, aesthetics and taste. Human–computer interaction (HCI) is the study of how humans interact with computer systems. Many disciplines contribute to HCI, including computer science, psychology, ergonomics, engineering, and graphic design. HCI is a broad term that covers all aspects of the way in which people interact with computers. In their daily lives, people are coming into contact with an increasing number of computer-based technologies. Some of these computer systems, such as personal computers, we use directly. 
We come into contact with other systems less directly — for example, we have all seen cashiers use laser scanners and digital cash registers when we shop. We have taken the same but in extensible line and made more solid justified by linking with other scientific pillars and concluded some of the best holistic base work for future innovations. It is done by inspecting various theories of Colour, Shape, Wave, Fonts, Design language and other miscellaneous theories in detail. Keywords— Karamvir Singh Rajpal, Mandeep Singh Rajpal, User Interface, User Experience, Design, Frontend, Neonex Technology,", "title": "" }, { "docid": "5bf761b94840bcab163ae3a321063b8b", "text": "The simulation method plays an important role in the investigation of the intrabody communication (IBC). Due to the problems of the transfer function and the corresponding parameters, only the simulation of the galvanic coupling IBC along the arm has been achieved at present. In this paper, a method for the mathematical simulation of the galvanic coupling IBC with different signal transmission paths has been introduced. First, a new transfer function of the galvanic coupling IBC was derived with the consideration of the internal resistances of the IBC devices. Second, the determination of the corresponding parameters used in the transfer function was discussed in detail. Finally, both the measurements and the simulations of the galvanic coupling IBC along the different signal transmission paths were carried out. Our investigation shows that the mathematical simulation results coincide with the measurement results over the frequency range from 100 kHz to 5 MHz, which indicates that the proposed method offers the significant advantages in the theoretical analysis and the application of the galvanic coupling IBC.", "title": "" }, { "docid": "0baf2c97da07f954a76b81f840ccca9e", "text": "3 Chapter 1 Introduction 1.1 Background: Identification is an action of recognizing or being recognized, in particular, identification of a thing or person from previous exposures or information. Identification these days is quite necessary as for security purposes. It can be done using biometric parameters such as finger prints, I.D scan, face recognition etc. Most probably the first well known example of a facial recognition system is because of Kohonen, who signified that an uncomplicated neural network could execute face recognition for aligned and normalized face images. The sort of network he recruited was by computing a face illustration by estimating the eigenvectors of the face image's autocorrelation pattern; these eigenvectors are currently called as`Eigen faces. But Kohonen's approach was not a real time triumph due to the need for accurate alignment and normalization. In successive years a great number of researchers attempted facial recognition systems based on edges, inter-feature spaces, and various neural network techniques. While many were victorious using small scale databases of aligned samples, but no one significantly directed the alternative practical problem of vast databases where the position and scale of the face was not known. An image is supposed to be outcome of two real variables, defined in the \" real world \" , for example, a(x, y) where 'a' is the amplitude in terms of brightness of the image at the real coordinate position (x, y). It is now practicable to operate multi-dimensional signals with systems that vary from simple digital circuits to complicated circuits, due to modern technology. 
 Image Analysis (input image->computation out)  Image Understanding (input image-> high-level interpretation out) 4 In this age of science and technology, images also attain wider opportunity due to the rapidly increasing significance of scientific visualization, for example microarray data in genetic research. To process the image firstly it is transformed into a digital form. Digitization comprises of sampling of image and quantization of sampled values. After transformed into a digital form, processing is performed. It introduces focal attention on image, or improvement of image features such as boundaries, or variation that make a graphic display more effective for representation & study. This technique does not enlarge the intrinsic information content in data. This technique is used to remove the unwanted observed image to reduce the effect of mortifications. Scope and precision of the knowledge of mortifications process and filter design are the basis of …", "title": "" }, { "docid": "2399755bed6b1fc5fac495d54886acc0", "text": "Lately fire outbreak is common issue happening in Malays and the damage caused by these type of incidents is tremendous toward nature and human interest. Due to this the need for application for fire detection has increases in recent years. In this paper we proposed a fire detection algorithm based on image processing techniques which is compatible in surveillance devices like CCTV, wireless camera to UAVs. The algorithm uses RGB colour model to detect the colour of the fire which is mainly comprehended by the intensity of the component R which is red colour. The growth of fire is detected using sobel edge detection. Finally a colour based segmentation technique was applied based on the results from the first technique and second technique to identify the region of interest (ROI) of the fire. After analysing 50 different fire scenarios images, the final accuracy obtained from testing the algorithm was 93.61% and the efficiency was 80.64%.", "title": "" }, { "docid": "861d7ad76337bc7960493d0b69976253", "text": "Dysuria, defined as pain, burning, or discomfort on urination, is more common in women than in men. Although urinary tract infection is the most frequent cause of dysuria, empiric treatment with antibiotics is not always appropriate. Dysuria occurs more often in younger women, probably because of their greater frequency of sexual activity. Older men are more likely to have dysuria because of an increased incidence of prostatic hyperplasia with accompanying inflammation and infection. A comprehensive history and physical examination can often reveal the cause of dysuria. Urinalysis may not be needed in healthier patients who have uncomplicated medical histories and symptoms. In most patients, however, urinalysis can help to determine the presence of infection and confirm a suspected diagnosis. Urine cultures and both urethral and vaginal smears and cultures can help to identify sites of infection and causative agents. Coliform organisms, notably Escherichia coli, are the most common pathogens in urinary tract infection. Dysuria can also be caused by noninfectious inflammation or trauma, neoplasm, calculi, hypoestrogenism, interstitial cystitis, or psychogenic disorders. 
Although radiography and other forms of imaging are rarely needed, these studies may identify abnormalities in the upper urinary tract when symptoms are more complex.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" } ]
scidocsrr
57e11aded7066e5dff1db7ba76c47d23
Sphere-meshes for real-time hand modeling and tracking
[ { "docid": "2b2398bf61847843e18d1f9150a1bccc", "text": "We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.", "title": "" }, { "docid": "ad40625ae8500d8724523ae2e663eeae", "text": "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.", "title": "" } ]
[ { "docid": "5d3893a22635a977760cde03d3542d2a", "text": "We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are ‘important’ for predictions – producing visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV [1], and the full paper [8].1", "title": "" }, { "docid": "bf0032959e170733859061e9ca678b03", "text": "This document discribes different optimization methods applied to the air traffic management domain. The first part details genetic algorithms and introduces a crossover operator adapted to partially separated problems. The operator is tested on an airport ground traffic optimization problem. In the second part, the conflict resolution problem is optimized with different (centralized or autonomous) models and different algorithms : genetic algorithms, branch and bound, neural networks, semidefinite programming, hybridization of genetic algorithms and deterministic methods such as linear programming or A∗ algorithms.", "title": "" }, { "docid": "d54e33049b3f5170ec8bd09d8f17c05c", "text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "title": "" }, { "docid": "e60c295d02b87d4c88e159a3343e0dcb", "text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. 
Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.", "title": "" }, { "docid": "7e422bc9e691d552543c245e7c154cbf", "text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.", "title": "" }, { "docid": "e36b2e45cd0153c9167dc515c08f84d0", "text": "It can be argued that the successful management of change is crucial to any organisation in order to survive and succeed in the present highly competitive and continuously evolving business environment. However, theories and approaches to change management currently available to academics and practitioners are often contradictory, mostly lacking empirical evidence and supported by unchallenged hypotheses concerning the nature of contemporary organisational change management. The purpose of this article is, therefore, to provide a critical review of some of the main theories and approaches to organisational change management as an important first step towards constructing a new framework for managing change. 
The article concludes with recommendations for further research.", "title": "" }, { "docid": "72beed143d925c4e51b7ad1ca063dcc6", "text": "Printed Circuit Board (PCB) is the most important component of the electronic industry and thus, the electronic mass-production facilities make an attempt to achieve Printed Circuit Board with 100% quality assurance. To achieve Printed Circuit Board with 100% quality assurance i.e. to produce zero-defect Printed Circuit Board, inspection method is the vital process during manufacturing of PCB. This paper presents the Hybrid approach which helps in producing zero-defect PCB by detecting the defects. This approach not only detects the defect but also classifies and locates the defects. The hybrid approach uses referential and non-referential methods to analyze the Printed Circuit Board and this approach proposed an algorithm which involves Image Representation, Image Comparison, and Image Segmentation. The algorithm was tested with mass number of images of Printed Circuit Board. The experimental result shows that higher quality assurance is obtained than the previously implemented algorithms.", "title": "" }, { "docid": "96d2e884c65205ef458214594f8b64f5", "text": "The weak methods occur pervasively in AI systems and may form the basic methods for all intelligent systems. The purpose of this paper is to characterize the weak methods and to explain how and why they arise in intelligent systems. We propose an organization, called a universal weak method that provides functionality of all the weak methods.* A universal weak method is an organizational scheme for knowledge that produces the appropriate search behavior given the available task-domain knowledge. We present a problem solving architecture, called SOAR, in which we realize a universal weak method. We then demonstrate the universal weak method with a variety of weak methods on a set of tasks. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No: 3597, monitored by the Air Force Avionics Laboratory Under Contract F33515-78-C-155L The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.", "title": "" }, { "docid": "13ad3f52725d8417668ca12d5070482b", "text": "Decoronation of ankylosed teeth in infraposition was introduced in 1984 by Malmgren and co-workers (1). This method is used all over the world today. It has been clinically shown that the procedure preserves the alveolar width and rebuilds lost vertical bone of the alveolar ridge in growing individuals. The biological explanation is that the decoronated root serves as a matrix for new bone development during resorption of the root and that the lost vertical alveolar bone is rebuilt during eruption of adjacent teeth. First a new periosteum is formed over the decoronated root, allowing vertical alveolar growth. Then the interdental fibers that have been severed by the decoronation procedure are reorganized between adjacent teeth. The continued eruption of these teeth mediates marginal bone apposition via the dental-periosteal fiber complex. The erupting teeth are linked with the periosteum covering the top of the alveolar socket and indirectly via the alveolar gingival fibers, which are inserted in the alveolar crest and in the lamina propria of the interdental papilla.
Both structures can generate a traction force resulting in bone apposition on top of the alveolar crest. This theoretical biological explanation is based on known anatomical features, known eruption processes and clinical observations.", "title": "" }, { "docid": "4c87f3fb470cb01781b563889b1261d2", "text": "Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., ICCV 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.", "title": "" }, { "docid": "a4e92e4dc5d93aec4414bc650436c522", "text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.", "title": "" }, { "docid": "b31723195f18a128e2de04918808601d", "text": "Realistic secure processors, including those built for academic and commercial purposes, commonly realize an “attested execution” abstraction. Despite being the de facto standard for modern secure processors, the “attested execution” abstraction has not received adequate formal treatment. We provide formal abstractions for “attested execution” secure processors and rigorously explore its expressive power. Our explorations show both the expected and the surprising. 
On one hand, we show that just like the common belief, attested execution is extremely powerful, and allows one to realize powerful cryptographic abstractions such as stateful obfuscation whose existence is otherwise impossible even when assuming virtual blackbox obfuscation and stateless hardware tokens. On the other hand, we show that surprisingly, realizing composable two-party computation with attested execution processors is not as straightforward as one might anticipate. Specifically, only when both parties are equipped with a secure processor can we realize composable two-party computation. If one of the parties does not have a secure processor, we show that composable two-party computation is impossible. In practice, however, it would be desirable to allow multiple legacy clients (without secure processors) to leverage a server’s secure processor to perform a multi-party computation task. We show how to introduce minimal additional setup assumptions to enable this. Finally, we show that fair multi-party computation for general functionalities is impossible if secure processors do not have trusted clocks. When secure processors have trusted clocks, we can realize fair two-party computation if both parties are equipped with a secure processor; but if only one party has a secure processor (with a trusted clock), then fairness is still impossible for general functionalities.", "title": "" }, { "docid": "e7bbef4600048504c8019ff7fdb4758c", "text": "Convenient assays for superoxide dismutase have necessarily been of the indirect type. It was observed that among the different methods used for the assay of superoxide dismutase in rat liver homogenate, namely the xanthine-xanthine oxidase ferricytochromec, xanthine-xanthine oxidase nitroblue tetrazolium, and pyrogallol autoxidation methods, a modified pyrogallol autoxidation method appeared to be simple, rapid and reproducible. The xanthine-xanthine oxidase ferricytochromec method was applicable only to dialysed crude tissue homogenates. The xanthine-xanthine oxidase nitroblue tetrazolium method, either with sodium carbonate solution, pH 10.2, or potassium phosphate buffer, pH 7·8, was not applicable to rat liver homogenate even after extensive dialysis. Using the modified pyrogallol autoxidation method, data have been obtained for superoxide dismutase activity in different tissues of rat. The effect of age, including neonatal and postnatal development on the activity, as well as activity in normal and cancerous human tissues were also studied. The pyrogallol method has also been used for the assay of iron-containing superoxide dismutase inEscherichia coli and for the identification of superoxide dismutase on polyacrylamide gels after electrophoresis.", "title": "" }, { "docid": "be689d89e1e5182895a473a52a1950cd", "text": "This paper designs a Continuous Data Level Auditing system utilizing business process based analytical procedures and evaluates the system’s performance using disaggregated transaction records of a large healthcare management firm. An important innovation in the proposed architecture of the CDA system is the utilization of analytical monitoring as the second (rather than the first) stage of data analysis. The first component of the system utilizes automatic transaction verification to filter out exceptions, defined as transactions violating formal business process rules. 
The second component of the system utilizes business process based analytical procedures, denoted here \"Continuity Equations\", as the expectation models for creating business process audit benchmarks. Our first objective is to examine several expectation models that can serve as the continuity equation benchmarks: a Linear Regression Model, a Simultaneous Equation Model, two Vector Autoregressive models, and a GARCH model. The second objective is to examine the impact of the choice of the level of data aggregation on anomaly detection performance. The third objective is to design a set of online learning and error correction protocols for automatic model inference and updating. Using a seeded error simulation approach, we demonstrate that the use of disaggregated business process data allows the detection of anomalies that slip through the analytical procedures applied to more aggregated data. Furthermore, the results indicate that under most circumstances the use of real time error correction results in superior performance, thus showing the benefit of continuous auditing.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shared anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-", "title": "" }, { "docid": "4e8c0810a7869b5b4cddf27c12aea4d9", "text": "The success of deep learning has been a catalyst to solving increasingly complex machine-learning problems, which often involve multiple data modalities. We review recent advances in deep multimodal learning and highlight the state-of-the-art, as well as gaps and challenges in this active research field. We first classify deep multimodal learning architectures and then discuss methods to fuse learned multimodal representations in deep-learning architectures. We highlight two areas of research – regularization strategies and methods that learn or optimize multimodal fusion structures – as exciting areas for future work.", "title": "" }, { "docid": "f1ab700dead6ef13e626940333ea782f", "text": "The 3(rd) English edition of the Japanese classification of biliary tract cancers was released approximately 10 years after the 5(th) Japanese edition and the 2(nd) English edition. Since the first Japanese edition was published in 1981, the Japanese classification has been in extensive use, particularly among Japanese surgeons and pathologists, because the cancer status and clinical outcomes in surgically resected cases have been the main objects of interest. However, recent advances in the diagnosis, management and research of the disease prompted the revision of the classification that can be used by not only surgeons and pathologists but also by all clinicians and researchers, for the evaluation of current disease status, the determination of current appropriate treatment, and the future development of medical practice for biliary tract cancers.
Furthermore, during the past 10 years, globalization has advanced rapidly, and therefore, internationalization of the classification was an important issue to revise the Japanese original staging system, which would facilitate to compare the disease information among institutions worldwide. In order to achieve these objectives, the new Japanese classification of the biliary tract cancers principally adopted the 7(th) edition of staging system developed by the International Union Against Cancer (UICC) and the American Joint Committee on Cancer (AJCC). However, because there are some points pending in these systems, several distinctive points were also included for the purpose of collection of information for the future optimization of the staging system. Free mobile application of the new Japanese classification of the biliary tract cancers is available via http://www.jshbps.jp/en/classification/cbt15.html.", "title": "" }, { "docid": "fde2aefec80624ff4bc21d055ffbe27b", "text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.", "title": "" }, { "docid": "c68c5df29702e797b758474f4e8b137e", "text": "Abstract—A miniaturized printed log-periodic fractal dipole antenna is proposed. Tree fractal structure is introduced in an antenna design and evolves the traditional Euclidean log-periodic dipole array into the log-periodic second-iteration tree-dipole array (LPT2DA) for the first time. Main parameters and characteristics of the proposed antenna are discussed. A fabricated proof-of-concept prototype of the proposed antenna is etched on a FR4 substrate with a relative permittivity of 4.4 and volume of 490 mm × 245 mm × 1.5 mm. The impedance bandwidth (measured VSWR < 2) of the fabricated antenna with approximate 40% reduction of traditional log-periodic dipole antenna is from 0.37 to 3.55GHz with a ratio of about 9.59 : 1. Both numerical and experimental results show that the proposed antenna has stable directional radiation patterns and apparently miniaturized effect, which are suitable for various ultra-wideband applications.", "title": "" }, { "docid": "69edd9a94a05c299ae5efeae4ca3e1ae", "text": "Scene-aware dialog systems will be able to have conversations with users about the objects and events around them. Progress on such systems can be made by integrating state-of-the-art technologies from multiple research areas including end-to-end dialog systems visual dialog,and video description. We introduce the Audio Visual SceneAware Dialog (AVSD) challenge and dataset. In this challenge, which is one track of the 7th Dialog System Technology Challenges (DSTC7) workshop1, the task is to build a system that generates responses in a dialog about an input video.", "title": "" } ]
scidocsrr
eb42032ce116985090fffd79efa74982
A Lightweight Method for Building Reliable Operating Systems Despite Unreliable Device Drivers Technical Report IRCS-018 , January 2006
[ { "docid": "68bab5e0579a0cdbaf232850e0587e11", "text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not.To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected.We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.", "title": "" }, { "docid": "08e5fda460321069cbc1b33134332c2d", "text": "We propose a method to reuse unmodified device drivers and to improve system dependability using virtual machines. We run the unmodified device driver, with its original operating system, in a virtual machine. This approach enables extensive reuse of existing and unmodified drivers, independent of the OS or device vendor, significantly reducing the barrier to building new OS endeavors. By allowing distinct device drivers to reside in separate virtual machines, this technique isolates faults caused by defective or malicious drivers, thus improving a system’s dependability. We show that our technique requires minimal support infrastructure and provides strong fault isolation. Our prototype’s network performance is within 3–8% of a native Linux system. Each additional virtual machine increases the CPU utilization by about 0.12%. We have successfully reused a wide variety of unmodified Linux network, disk, and PCI device drivers.", "title": "" } ]
[ { "docid": "2d7251e7c6029dae6e32c742c2ad3709", "text": "Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.", "title": "" }, { "docid": "f78a514dc16163f7de1c21f56211444f", "text": "Many methods have been used to recognise author personality traits from text, typically combining linguistic feature engineering with shallow learning models, e.g. linear regression or Support Vector Machines. This work uses deep-learningbased models and atomic features of text – the characters – to build hierarchical, vectorial word and sentence representations for trait inference. This method, applied to a corpus of tweets, shows state-of-theart performance across five traits and three languages (English, Spanish and Italian) compared with prior work in author profiling. The results, supported by preliminary visualisation work, are encouraging for the ability to detect complex human traits.", "title": "" }, { "docid": "d04a6ca9c09b8c10daf64c9f7830c992", "text": "Slave servo clocks have an essential role in hardware and software synchronization techniques based on Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no univocal criteria exist for servo clock design. In fact, the relationship between controller design, performances and uncertainty sources is quite evanescent. In this paper, we propose a quite simple, but exhaustive linear model, which is expected to be used in the design of enhanced servo clock architectures.", "title": "" }, { "docid": "880c9122f0080f8d5abe58d21104dd22", "text": "In wearable visual computing, maintaining a time-evolving representation of the 3D environment along with the pose of the camera provides the geometrical foundation on which person-centred processing can be built. In this paper, an established method for the recognition of feature clusters is used on live imagery to identify and locate planar objects around the wearer. Objects’ locations are incorporated as additional 3D measurements into a monocular simultaneous localization and mapping process, which routinely uses 2D image measurements to acquire and maintain a map of the surroundings, irrespective of whether objects are present or not. Augmenting the 3D maps with automatically recognized objects enables useful annotations of the surroundings to be presented to the wearer. 
After demonstrating the geometrical integrity of the method, experiments show its use in two augmented reality applications.", "title": "" }, { "docid": "2bf70c7899f6a0263122bd3492b95590", "text": "We present a hierarchical classification model that allows rare objects to borrow statistical strength from related objects that have many training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. Our experimental results on the challenging object localization and detection task demonstrate that the proposed model substantially improves the accuracy of the standard single object detectors that ignore hierarchical structure altogether.", "title": "" }, { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" }, { "docid": "769885103664a070aba9fa963c2e0506", "text": "As a novel biologically inspired underwater vehicle, a robotic manta ray (RoMan-II) has been developed for potential marine applications. Manta ray can perform diversified locomotion patterns in water by manipulating two wide tins. These motion patterns have been implemented on the developed fish robot, including swimming by flapping fins, turning by modulating phase relations of fins, and online transition of different motion patterns. The movements are achieved by using a model of artificial central pattern generators (CPGs) constructed with coupled nonlinear oscillators. This paper focuses on the analytical formulation of coupling terms in the CPG model and the implementation issues of the CPG-based control on the fish robot. The control method demonstrated on the manta ray robot is expected to be a frame- work that can tackle locomotion control problems in other types of multifin-actuated fish robots or more general robots with rhythmic movement patterns.", "title": "" }, { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "7ee36ec4fcae527bfd766e5f00305d5f", "text": "This book is the first technical overview of autonomous vehicles written for a general computing and engineering audience. The authors share their practical experiences of creating autonomous vehicle systems. These systems are complex, consisting of three major subsystems: (1) algorithms for localization, perception, and planning and control; (2) client systems, such as the robotics operating system and hardware platform; and (3) the cloud platform, which includes data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from sensor raw data to understand its environment and make decisions about its actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous vehicles. Using the cloud platform, we are able to test new algorithms and update the HD map—plus, train better recognition, tracking, and decision models. This book consists of nine chapters. Chapter 1 provides an overview of autonomous vehicle systems; Chapter 2 focuses on localization technologies; Chapter 3 discusses traditional techniques used for perception; Chapter 4 discusses deep learning based techniques for perception; Chapter 5 introduces the planning and control sub-system, especially prediction and routing technologies; Chapter 6 focuses on motion planning and feedback control of the planning and control subsystem; Chapter 7 introduces reinforcement learning-based planning and control; Chapter 8 delves into the details of client systems design; and Chapter 9 provides the details of cloud platforms for autonomous driving. This book should be useful to students, researchers, and practitioners alike. Whether you are an undergraduate or a graduate student interested in autonomous driving, you will find herein a comprehensive overview of the whole autonomous vehicle technology stack. If you are an autonomous driving practitioner, the many practical techniques introduced in this book will be of interest to you. Researchers will also find plenty of references for an effective, deeper exploration of the various technologies.", "title": "" }, { "docid": "9fdf63457611e41384a26cb208f718b4", "text": "Cyberbullying, in its different forms, is common among children and adolescents and is facilitated by the increased use of technology. The consequences of cyberbullying could be severe, especially on mental health, potentially leading to suicide in extreme cases. Although parents, schools and online social networking sites are encouraged to provide a safe online environment, little is known about the legal avenues which could be utilised to prevent cyberbullying or act as a deterrent to such. This article attempts to explore current laws, and the challenges that exist to establishing cyberbullying legislation in the context of the UK. 
It is arguable that a number of statutes may be of assistance in relation to cyberbullying, namely Education and Inspections Act 2006, Protection from Harassment Act 1997, Communications Act 2003, Telecommunication Act 1988, Public Order Act 1986, Obscene Publications Act 1959, Computer Misuse Act 1990, Crime and Disorder Act 1998, Defamation Act 2013. However, given the lack of clear definition of bullying, the applicability of these laws to cyberbullying is open to debate. Establishing new legislation or a modification to existing laws is particularly challenging for a number of reasons, namely: an absence of consistent bullying/cyberbullying definition, a difficulty in determining intention to harm or evidence of such, a lack of surveillance, a lack of general awareness, issues surrounding jurisdiction, the role of technology, and the age of criminal responsibility. These challenges are elaborated and discussed in this article.", "title": "" }, { "docid": "c29586780948b05929bed472bccb48e3", "text": "Recognition and perception based mobile applications, such as image recognition, are on the rise. These applications recognize the user's surroundings and augment it with information and/or media. These applications are latency-sensitive. They have a soft-realtime nature - late results are potentially meaningless. On the one hand, given the compute-intensive nature of the tasks performed by such applications, execution is typically offloaded to the cloud. On the other hand, offloading such applications to the cloud incurs network latency, which can increase the user-perceived latency. Consequently, edge computing has been proposed to let devices offload intensive tasks to edge servers instead of the cloud, to reduce latency. In this paper, we propose a different model for using edge servers. We propose to use the edge as a specialized cache for recognition applications and formulate the expected latency for such a cache. We show that using an edge server like a typical web cache, for recognition applications, can lead to higher latencies. We propose Cachier, a system that uses the caching model along with novel optimizations to minimize latency by adaptively balancing load between the edge and the cloud, by leveraging spatiotemporal locality of requests, using offline analysis of applications, and online estimates of network conditions. We evaluate Cachier for image-recognition applications and show that our techniques yield 3x speedup in responsiveness, and perform accurately over a range of operating conditions. To the best of our knowledge, this is the first work that models edge servers as caches for compute-intensive recognition applications, and Cachier is the first system that uses this model to minimize latency for these applications.", "title": "" }, { "docid": "5b50e84437dc27f5b38b53d8613ae2c7", "text": "We present a practical vision-based robotic bin-picking system that performs detection and 3D pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses.
FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods.", "title": "" }, { "docid": "9333fab791f45ba737158f46dc7e857c", "text": "In recent years, much progress has been made on the development of biodegradable magnesium alloys as \"smart\" implants in cardiovascular and orthopedic applications. Mg-based alloys as biodegradable implants have outstanding advantages over Fe-based and Zn-based ones. However, the extensive applications of Mg-based alloys are still inhibited mainly by their high degradation rates and consequent loss in mechanical integrity. Consequently, extensive studies have been conducted to develop Mg-based alloys with superior mechanical and corrosion performance. This review focuses on the following topics: (i) the design criteria of biodegradable materials; (ii) alloy development strategy; (iii) in vitro performances of currently developed Mg-based alloys; and (iv) in vivo performances of currently developed Mg-based implants, especially Mg-based alloys under clinical trials.", "title": "" }, { "docid": "df9d74df931a596b7025150d11a18364", "text": "In recent years, ''gamification'' has been proposed as a solution for engaging people in individually and socially sustainable behaviors, such as exercise, sustainable consumption, and education. This paper studies demographic differences in perceived benefits from gamification in the context of exercise. On the basis of data gathered via an online survey (N = 195) from an exercise gamification service Fitocracy, we examine the effects of gender, age, and time using the service on social, hedonic, and utilitarian benefits and facilitating features of gamifying exercise. The results indicate that perceived enjoyment and usefulness of the gamification decline with use, suggesting that users might experience novelty effects from the service. The findings show that women report greater social benefits from the use of gamification. Further, ease of use of gamification is shown to decline with age. The implications of the findings are discussed. The question of how we understand gamer demographics and gaming behaviors, along with use cultures of different demographic groups, has loomed over the last decade as games became one of the main veins of entertainment and consumer culture (Yi, 2004). The deeply established perception of games being a field of entertainment dominated by young males has been challenged. Nowadays, digital gaming is a mainstream activity with broad demographics. The gender divide has been diminishing, the age span has been widening, and the average age is higher than An illustrative study commissioned by PopCap (Information Solutions Group, 2011) reveals that it is actually women in their 30s and 40s who play the popular social games on social networking services (see e.g. most – outplaying men and younger people.
It is clear that age and gender perspectives on gaming activities and motivations require further scrutiny. The expansion of the game industry and the increased competition within the field has also led to two parallel developments: (1) using game design as marketing (Hamari & Lehdonvirta, 2010) and (2) gamification – going beyond what traditionally are regarded as games and implementing game design there often for the benefit of users. For example, services such as Mindbloom, Fitocracy, Zombies, Run!, and Nike+ are aimed at assisting the user toward beneficial behavior related to lifestyle and health choices. However, it is unclear whether we can see age and gender discrepancies in use of gamified services similar to those in other digital gaming contexts. The main difference between games and gamifica-tion is that gamification is commonly …", "title": "" }, { "docid": "1fa056e87c10811b38277d161c81c2ac", "text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.", "title": "" }, { "docid": "6b979f7a473686c0d36118e2d20e3c7d", "text": "Abstract An algorithm is proposed which generates a nonlinear kernel-based separating surface that requires as little as 1% of a large dataset for its explicit evaluation. To generate this nonlinear surface, the entire dataset is used as a constraint in an optimization problem with very few variables corresponding to the 1% of the data kept. The remainder of the data can be thrown away after solving the optimization problem. This is achieved by making use of a rectangular m×m̄ kernel K(A, Ā) that greatly reduces the size of the quadratic program to be solved and simplifies the characterization of the nonlinear separating surface. Here, the m rows of A represent the original m data points while the m̄ rows of Ā represent a greatly reduced m̄ data points. Computational results indicate that test set correctness for the reduced support vector machine (RSVM), with a nonlinear separating surface that depends on a small randomly selected portion of the dataset, is better than that of a conventional support vector machine (SVM) with a nonlinear surface that explicitly depends on the entire dataset, and much better than a conventional SVM using a small random sample of the data. Computational times, as well as memory usage, are much smaller for RSVM than that of a conventional SVM using the entire dataset.", "title": "" }, { "docid": "00e9dadbfcad7afca22e458fe52424a0", "text": "Ninety-nine police officers, not identified in previous research as belonging to groups that are superior in lie detection, attempted to detect truths and lies told by suspects during their videotaped police interviews. Accuracy rates were higher than those typically found in deception research and reached levels similar to those obtained by specialized lie detectors in previous research. 
Accuracy was positively correlated with perceived experience in interviewing suspects and with mentioning cues to detecting deceit that relate to a suspect's story. Accuracy was negatively correlated with popular stereotypical cues such as gaze aversion and fidgeting. As in previous research, accuracy and confidence were not significantly correlated, but the level of confidence was dependent on whether officers judged actual truths or actual lies and on the method by which confidence was measured.", "title": "" }, { "docid": "2f307e10caab050596bc7c081ae95605", "text": "Motion planning is a fundamental tool in robotics, used to generate collision-free, smooth, trajectories, while satisfying task-dependent constraints. In this paper, we present a novel approach to motion planning using Gaussian processes. In contrast to most existing trajectory optimization algorithms, which rely on a discrete state parameterization in practice, we represent the continuous-time trajectory as a sample from a Gaussian process (GP) generated by a linear time-varying stochastic differential equation. We then provide a gradient-based optimization technique that optimizes continuous-time trajectories with respect to a cost functional. By exploiting GP interpolation, we develop the Gaussian Process Motion Planner (GPMP), that finds optimal trajectories parameterized by a small number of states. We benchmark our algorithm against recent trajectory optimization algorithms by solving 7-DOF robotic arm planning problems in simulation and validate our approach on a real 7-DOF WAM arm.", "title": "" }, { "docid": "e87c93e13f94191450216e308215ff38", "text": "Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however a base station has only a limited knowledge of the arrival processes of uplink flows.In this paper, we propose a new model for wireless fair scheduling based on an adaptation of fluid fair queueing to handle location-dependent error bursts. We describe an ideal wireless fair scheduling algorithm which provides a packetized implementation of the fluid model while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless fluid fair queueing model.", "title": "" } ]
scidocsrr
d2a6229e314c6c0ffd237d56585b12e0
Variations of the Similarity Function of TextRank for Automated Summarization
[ { "docid": "8921cffb633b0ea350b88a57ef0d4437", "text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.", "title": "" } ]
[ { "docid": "973f7ccabebeb79a82f8d40ae2720439", "text": "PURPOSE\nThis article aims to clarify the current state-of-the-art of robotic/mechanical devices for post-stroke thumb rehabilitation as well as the anatomical characteristics and motions of the thumb that are crucial for the development of any device that aims to support its motion.\n\n\nMETHODS\nA systematic literature search was conducted to identify robotic/mechanical devices for post-stroke thumb rehabilitation. Specific electronic databases and well-defined search terms and inclusion/exclusion criteria were used for such purpose. A reasoning model was devised to support the structured abstraction of relevant data from the literature of interest.\n\n\nRESULTS\nFollowing the main search and after removing duplicated and other non-relevant studies, 68 articles (corresponding to 32 devices) were left for further examination. These articles were analyzed to extract data relative to (i) the motions assisted/permitted - either actively or passively - by the device per anatomical joint of the thumb and (ii) mechanical-related aspects (i.e., architecture, connections to thumb, other fingers supported, adjustability to different hand sizes, actuators - type, quantity, location, power transmission and motion trajectory).\n\n\nCONCLUSIONS\nMost articles describe preliminary design and testing of prototypes, rather than the thorough evaluation of commercially ready devices. Defining appropriate kinematic models of the thumb upon which to design such devices still remains a challenging and unresolved task. Further research is needed before these devices can actually be implemented in clinical environments to serve their intended purpose of complementing the labour of therapists by facilitating intensive treatment with precise and repeatable exercises. Implications for Rehabilitation Post-stroke functional disability of the hand, and particularly of the thumb, significantly affects the capability to perform activities of daily living, threatening the independence and quality of life of the stroke survivors. The latest studies show that a high-dose intensive therapy (in terms of frequency, duration and intensity/effort) is the key to effectively modify neural organization and recover the motor skills that were lost after a stroke. Conventional therapy based on manual interaction with physical therapists makes the procedure labour intensive and increases the costs. Robotic/mechanical devices hold promise for complementing conventional post-stroke therapy. Specifically, these devices can provide reliable and accurate therapy for long periods of time without the associated fatigue. Also, they can be used as a means to assess patients? performance and progress in an objective and consistent manner. The full potential of robot-assisted therapy is still to be unveiled. Further exploration will surely lead to devices that can be well accepted equally by therapists and patients and that can be useful both in clinical and home-based rehabilitation practice such that motor recovery of the hand becomes a common outcome in stroke survivors. This overview provides the reader, possibly a designer of such a device, with a complete overview of the state-of-the-art of robotic/mechanical devices consisting of or including features for the rehabilitation of the thumb. Also, we clarify the anatomical characteristics and motions of the thumb that are crucial for the development of any device that aims to support its motion. 
Hopefully, this, combined with the outlined opportunities for further research, leads to the improvement of current devices and the development of new technology and knowledge in the field.", "title": "" }, { "docid": "2f695b3ee94443705ba0f757bf655ae1", "text": "CORFU organizes a cluster of flash devices as a single, shared log that can be accessed concurrently by multiple clients over the network. The CORFU shared log makes it easy to build distributed applications that require strong consistency at high speeds, such as databases, transactional key-value stores, replicated state machines, and metadata services. CORFU can be viewed as a distributed SSD, providing advantages over conventional SSDs such as distributed wear-leveling, network locality, fault tolerance, incremental scalability and geodistribution. A single CORFU instance can support up to 200K appends/sec, while reads scale linearly with cluster size. Importantly, CORFU is designed to work directly over network-attached flash devices, slashing cost, power consumption and latency by eliminating storage servers.", "title": "" }, { "docid": "75519b3621d66f55202ce4cbecc8bff1", "text": "belief-network inference Adnan Darwiche and Gregory Provan Rockwell Science Center 1049 Camino Dos Rios Thousand Oaks, CA 91360 {darwiche, provan}@risc.rockwell.com Abstract We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each non-leaf node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks; we show how they can be generated using the clustering algorithm. The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The main value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm.", "title": "" }, { "docid": "285a1c073ec4712ac735ab84cbcd1fac", "text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. 
werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.", "title": "" }, { "docid": "19bd7a6c21dd50c5dc8d14d5cfd363ab", "text": "Frontotemporal dementia (FTD) is one of the most common forms of dementia in persons younger than 65 years. Variants include behavioral variant FTD, semantic dementia, and progressive nonfluent aphasia. Behavioral and language manifestations are core features of FTD, and patients have relatively preserved memory, which differs from Alzheimer disease. Common behavioral features include loss of insight, social inappropriateness, and emotional blunting. Common language features are loss of comprehension and object knowledge (semantic dementia), and nonfluent and hesitant speech (progressive nonfluent aphasia). Neuroimaging (magnetic resonance imaging) usually demonstrates focal atrophy in addition to excluding other etiologies. A careful history and physical examination, and judicious use of magnetic resonance imaging, can help distinguish FTD from other common forms of dementia, including Alzheimer disease, dementia with Lewy bodies, and vascular dementia. Although no cure for FTD exists, symptom management with selective serotonin reuptake inhibitors, antipsychotics, and galantamine has been shown to be beneficial. Primary care physicians have a critical role in identifying patients with FTD and assembling an interdisciplinary team to care for patients with FTD, their families, and caregivers.", "title": "" }, { "docid": "e3d1282b2ed8c9724cf64251df7e14df", "text": "This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.", "title": "" }, { "docid": "3ab0127894704f76407a7f733a4cb91c", "text": "Combining ideas from several previous proposals, such as Active Pages, DIVA, and ULMT, we present the Memory Arithmetic Unit and Interface (MAUI) architecture. Because the \"intelligence\" of the MAUI intelligent memory system architecture is located in the memory-controller, logic and DRAM are not required to be integrated into a single chip, and use of off-the-shelf DRAMs is permitted. The MAUI's computational engine performs memory-bound SIMD computations close to the memory system, enabling more efficient memory pipelining. A simulator modeling the MAUI architecture was added to the SimpleScalar v4.0 tool-set. Not surprisingly, simulations show that application speedup increases as the memory system speed increases and the dataset size increases. Simulation results show single-threaded application speedup of over 100% is possible, and suggest that a total system speedup of about 300% is possible in a multi-threaded environment.", "title": "" }, { "docid": "60664c058868f08a67d14172d87a4756", "text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. 
However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.", "title": "" }, { "docid": "81f6c52bb579645e5919eac629c90f6d", "text": "A DEA-based stochastic estimation framework is presented to evaluate contextual variables affecting productivity. Conditions are identified under which a two-stage procedure consisting of DEA followed by regression analysis yields consistent estimators of the impact of contextual variables. Conditions are also identified under which DEA in the first stage followed by maximum likelihood estimation in the second stage yields consistent estimators of the impact of contextual variables. Monte Carlo simulations are carried out to compare the performance of our two-stage approach with one-stage and two-stage parametric approaches. Simulation results suggest that DEA-based procedures perform as well as the best parametric method in the estimation of the impact of contextual variables on productivity. Simulation results also indicate that DEA-based procedures perform better than parametric methods in the estimation of individual decision making unit (DMU) productivity.", "title": "" }, { "docid": "b31be381768bd24e63a18cb778ebda74", "text": "For the last several years, convolutional neural network (CNN) based object detection systems have used a regression technique to predict improved object bounding boxes based on an initial proposal using low-level image features extracted from the CNN. In spite of its prevalence, there is little critical analysis of bounding-box regression or in-depth performance evaluation. This thesis surveys an array of techniques and parameter settings in order to further optimize bounding-box regression and provide guidance for its implementation. I refute a claim regarding training procedure, and demonstrate the effectiveness of using principal component analysis to handle unwieldy numbers of features produced by very deep CNNs.", "title": "" }, { "docid": "a814ce3fb1a3ab48c172120fe0a5125b", "text": "This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. 
and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.", "title": "" }, { "docid": "bdb051eb50c3b23b809e06bed81710fc", "text": "PURPOSE\nTo test the hypothesis that physicians' empathy is associated with positive clinical outcomes for diabetic patients.\n\n\nMETHOD\nA correlational study design was used in a university-affiliated outpatient setting. Participants were 891 diabetic patients, treated between July 2006 and June 2009, by 29 family physicians. Results of the most recent hemoglobin A1c and LDL-C tests were extracted from the patients' electronic records. The results of hemoglobin A1c tests were categorized into good control (<7.0%) and poor control (>9.0%). Similarly, the results of the LDL-C tests were grouped into good control (<100) and poor control (>130). The physicians, who completed the Jefferson Scale of Empathy in 2009, were grouped into high, moderate, and low empathy scorers. Associations between physicians' level of empathy scores and patient outcomes were examined.\n\n\nRESULTS\nPatients of physicians with high empathy scores were significantly more likely to have good control of hemoglobin A1c (56%) than were patients of physicians with low empathy scores (40%, P < .001). Similarly, the proportion of patients with good LDL-C control was significantly higher for physicians with high empathy scores (59%) than physicians with low scores (44%, P < .001). Logistic regression analyses indicated that physicians' empathy had a unique contribution to the prediction of optimal clinical outcomes after controlling for physicians' and patients' gender and age, and patients' health insurance.\n\n\nCONCLUSIONS\nThe hypothesis of a positive relationship between physicians' empathy and patients' clinical outcomes was confirmed, suggesting that physicians' empathy is an important factor associated with clinical competence and patient outcomes.", "title": "" }, { "docid": "f82eb2d4cc45577f08c7e867bf012816", "text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. 
Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.", "title": "" }, { "docid": "eb6823bcc7e01dbdc9a21388bde0ce4f", "text": "This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of humancentred automation and LOA and AA approaches. Building on this background, an initial study was presented that attempts to address the conjuncture of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed-up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be ‘additive’ in nature. That is, the LOA producing the best performance (low level automation) did not do so at the AACT, which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation. Theor. Issues in Ergon. Sci., 2003, 1–40, preview article", "title": "" }, { "docid": "bb7ac75ae27f16b67a2f5438e7a23aea", "text": "The problem of mining frequent closed patterns has received considerable attention recently as it promises to have much less redundancy compared to discovering all frequent patterns. Existing algorithms can presently be separated into two groups, feature (column) enumeration and row enumeration. 
Feature enumeration algorithms like CHARM and CLOSET+ are efficient for datasets with small number of features and large number of rows since the number of feature combinations to be enumerated is small. Row enumeration algorithms like CARPENTER on the other hand are more suitable for datasets (eg. bioinformatics data) with large number of features and small number of rows. Both groups of algorithms, however, will encounter problem for datasets that have large number of rows and features. In this paper, we describe a new algorithm called COBBLER which can efficiently mine such datasets. COBBLER is designed to dynamically switch between feature enumeration and row enumeration depending on the data characteristic in the process of mining. As such, each portion of the dataset can be processed using the most suitable method, making the mining more efficient. Several experiments on real-life and synthetic datasets show that COBBLER is an order of magnitude better than previous closed pattern mining algorithms like CHARM, CLOSET+ and CARPENTER.", "title": "" }, { "docid": "563c0eeaeaaf4cbb005d97814be35aea", "text": "Current multiprocessor systems execute parallel and concurrent software nondeterministically: even when given precisely the same input, two executions of the same program may produce different output. This severely complicates debugging, testing, and automatic replication for fault-tolerance. Previous efforts to address this issue have focused primarily on record and replay, but making execution actually deterministic would address the problem at the root. Our goals in this work are twofold: (1) to provide fully deterministic execution of arbitrary, unmodified, multithreaded programs as an OS service; and (2) to make all sources of intentional nondeterminism, such as network I/O, be explicit and controllable. To this end we propose a new OS abstraction, the Deterministic Process Group (DPG). All communication between threads and processes internal to a DPG happens deterministically, including implicit communication via shared-memory accesses, as well as communication via OS channels such as pipes, signals, and the filesystem. To deal with fundamentally nondeterministic external events, our abstraction includes the shim layer, a programmable interface that interposes on all interaction between a DPG and the external world, making determinism useful even for reactive applications. We implemented the DPG abstraction as an extension to Linux and demonstrate its benefits with three use cases: plain deterministic execution; replicated execution; and record and replay by logging just external input. We evaluated our implementation on both parallel and reactive workloads, including Apache, Chromium, and PARSEC.", "title": "" }, { "docid": "efa4f154549c81a31421d32ad44267b9", "text": "PURPOSE OF REVIEW\nDespite the American public following recommendations to decrease absolute dietary fat intake and specifically decrease saturated fat intake, we have seen a dramatic rise over the past 40 years in the rates of non-communicable diseases associated with obesity and overweight, namely cardiovascular disease. The development of the diet-heart hypothesis in the mid twentieth century led to faulty but long-held beliefs that dietary intake of saturated fat led to heart disease. 
Saturated fat can lead to increased LDL cholesterol levels, and elevated plasma cholesterol levels have been shown to be a risk factor for cardiovascular disease; however, the correlative nature of their association does not assign causation.\n\n\nRECENT FINDINGS\nAdvances in understanding the role of various lipoprotein particles and their atherogenic risk have been helpful for understanding how different dietary components may impact CVD risk. Numerous meta-analyses and systematic reviews of both the historical and current literature reveals that the diet-heart hypothesis was not, and still is not, supported by the evidence. There appears to be no consistent benefit to all-cause or CVD mortality from the reduction of dietary saturated fat. Further, saturated fat has been shown in some cases to have an inverse relationship with obesity-related type 2 diabetes. Rather than focus on a single nutrient, the overall diet quality and elimination of processed foods, including simple carbohydrates, would likely do more to improve CVD and overall health. It is in the best interest of the American public to clarify dietary guidelines to recognize that dietary saturated fat is not the villain we once thought it was.", "title": "" }, { "docid": "285f57b2b37636c417459f5d886a7982", "text": "We have prepared a set of notes incorporating the visual aids used during the Information Extraction Tutorial for the IJCAI-99 tutorial series. This document also contains additional information, such as the URLs of sites on the World Wide Web containing additional information likely to be of interest. If you are reading this document using an appropriately configured Acrobat Reader (available free from Adobe at http://www.adobe.com/prodindex/acrobat/readstep.html), you can go directly to these URLs in your web browser by clicking them. This tutorial is designed to introduce you to the fundamental concepts of information extraction (IE) technology, and to give you an idea of what the state of the art performance in extraction technology is, what is involved in building IE systems, and various approaches taken to their design and implementation, and the kinds of resources and tools that are available to assist in constructing information extraction systems, including linguistic resources such as lexicons and name lists, as well as tools for annotating training data for automatically trained systems. Most IE systems process texts in sequential steps (or \"phases\") ranging from lexical and morphological processing, recognition and typing of proper names, parsing of larger syntactic constituents, resolution of anaphora and coreference, and the ultimate extraction of domain-relevant events and relationships from the text. We discuss each of these system components and various approaches to their design. In addition to these tutorial notes, the authors have prepared several other resources related to information extraction of which you may wish to avail yourself. We have created a web page for this tutorial at the URL mentioned in the Power Point slide in the next illustration. This page provides many links of interest to anyone wanting more information about the field of information extraction, including pointers to research sites, commercial sites, and system development tools. 
We felt that providing this resource would be appreciated by those taking the tutorial, however, we subject ourselves to the risk that some interesting and relevant information has been inadvertently omitted during our preparations. Please do not interpret the presence or absence of a link to any system or research paper to be a positive or negative evaluation of the system or …", "title": "" }, { "docid": "fcea8882b303897fd47cbece47271512", "text": "Inference in the presence of outliers is an important field of research as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using an scheduler for the classification of outliers, our method is more efficient and has better convergence over the standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.", "title": "" }, { "docid": "db94999f5e24f511b857f208d1a29459", "text": "A survey of video databases that can be used within a continuous sign language recognition scenario to measure the performance of head and hand tracking algorithms either w.r.t. a tracking error rate or w.r.t. a word error rate criterion is presented in this work. Robust tracking algorithms are required as the signing hand frequently moves in front of the face, may temporarily disappear, or cross the other hand. Only few studies consider the recognition of continuous sign language, and usually special devices such as colored gloves or blue-boxing environments are used to accurately track the regions-of-interest in sign language processing. Ground-truth labels for hand and head positions have been annotated for more than 30k frames in several publicly available video databases of different degrees of difficulty, and preliminary tracking results are presented.", "title": "" } ]
scidocsrr
6de825b40672b3580dffec878926b5ad
Social media analytics for competitive advantage
[ { "docid": "8cda36e81db2bce7f9b648a20c0a55a5", "text": "Scalable and effective analysis of large text corpora remains a challenging problem as our ability to collect textual data continues to increase at an exponential rate. To help users make sense of large text corpora, we present a novel visual analytics system, Parallel-Topics, which integrates a state-of-the-art probabilistic topic model Latent Dirichlet Allocation (LDA) with interactive visualization. To describe a corpus of documents, ParallelTopics first extracts a set of semantically meaningful topics using LDA. Unlike most traditional clustering techniques in which a document is assigned to a specific cluster, the LDA model accounts for different topical aspects of each individual document. This permits effective full text analysis of larger documents that may contain multiple topics. To highlight this property of the model, ParallelTopics utilizes the parallel coordinate metaphor to present the probabilistic distribution of a document across topics. Such representation allows the users to discover single-topic vs. multi-topic documents and the relative importance of each topic to a document of interest. In addition, since most text corpora are inherently temporal, ParallelTopics also depicts the topic evolution over time. We have applied ParallelTopics to exploring and analyzing several text corpora, including the scientific proposals awarded by the National Science Foundation and the publications in the VAST community over the years. To demonstrate the efficacy of ParallelTopics, we conducted several expert evaluations, the results of which are reported in this paper.", "title": "" } ]
[ { "docid": "a94d8b425aed0ade657aa1091015e529", "text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.", "title": "" }, { "docid": "612665818a7134a9ad8bfac472d021cf", "text": "Matrix decomposition methods represent a data matrix as a product of two factor matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many methods return real-valued matrices. Interpreting real-valued factor matrices is hard if the original data is Boolean. In this paper, we describe a matrix decomposition formulation for Boolean data, the Discrete Basis Problem. The problem seeks for a Boolean decomposition of a binary matrix, thus allowing the user to easily interpret the basis vectors. We also describe a variation of the problem, the Discrete Basis Partitioning Problem. We show that both problems are NP-hard. For the Discrete Basis Problem, we give a simple greedy algorithm for solving it; for the Discrete Basis Partitioning Problem we show how it can be solved using existing methods. We present experimental results for the greedy algorithm and compare it against other, well known methods. Our algorithm gives intuitive basis vectors, but its reconstruction error is usually larger than with the real-valued methods. We discuss about the reasons for this behavior.", "title": "" }, { "docid": "b67fadb3f5dca0e74bebc498260f99a4", "text": "The interactive computation paradigm is reviewed and a particular example is extended to form the stochastic analog of a computational process via a transcription of a minimal Turing Machine into an equivalent asynchronous Cellular Automaton with an exponential waiting times distribution of effective transitions. Furthermore, a special toolbox for analytic derivation of recursive relations of important statistical and other quantities is introduced in the form of an Inductive Combinatorial Hierarchy.", "title": "" }, { "docid": "5c86ff18054344fe8c8b1911bbb56997", "text": "Nearest neighbor search methods based on hashing have attracted considerable attention for effective and efficient large-scale similarity search in computer vision and information retrieval community. In this paper, we study the problems of learning hash functions in the context of multimodal data for cross-view similarity search. We put forward a novel hashing method, which is referred to Collective Matrix Factorization Hashing (CMFH). CMFH learns unified hash codes by collective matrix factorization with latent factor model from different modalities of one instance, which can not only supports cross-view search but also increases the search accuracy by merging multiple view information sources. We also prove that CMFH, a similarity-preserving hashing learning method, has upper and lower boundaries. 
Extensive experiments verify that CMFH significantly outperforms several state-of-the-art methods on three different datasets.", "title": "" }, { "docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75", "text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "8a20ea85c44f66c0f63ee25f1abd0630", "text": "In this study, a human tissue-implantable compact folded dipole antenna of 19.6 * 2 * 0.254 mm3 operating in the Medical Implant Communication Service (MICS) frequency band (402-405 MHz) is presented. The antenna design and analysis is carried out inside a homogeneous flat phantom with electrical properties equivalent to those of 2/3 human muscle tissue. The dipole antenna, printed on a high-dielectric substrate layer, exhibits a frequency resonance at 402 MHz with a wide 10-dB impedance bandwidth of 105 MHz. 
The proposed antenna radiates an omnidirectional far-field radiation pattern with a maximum realized gain of -31.2 dB. In addition, the Specific Absorption Rate (SAR) assessment indicates the maximum input power deliverable to the antenna in order to meet the required safety regulations.", "title": "" }, { "docid": "39f51064adf460624a35fb00a730a715", "text": "For most outdoor applications, systems such as GPS provide users with accurate position estimates. However, reliable range-based localization using radio signals in indoor or urban environments can be a problem due to multipath fading and line-of-sight (LOS) blockage. The measurement bias introduced by these delays causes significant localization error, even when using additional sensors such as an inertial measurement unit (IMU) to perform outlier rejection. We describe an algorithm for accurate indoor localization of a sensor in a network of known beacons. The sensor measures the range to the beacons using an Ultra-Wideband (UWB) signal and uses statistical inference to infer and correct for the bias due to LOS blockage in the range measurements. We show that a particle filter can be used to estimate the joint distribution over both pose and beacon biases. We use the particle filter estimation technique specifically to capture the non-linearity of transitions in the beacon bias as the sensor moves. Results using real-world and simulated data are presented.", "title": "" }, { "docid": "8da0d4884947d973a9121ea8f726ea61", "text": "Soil and water pollution is becoming one of major burden in modern Indian society due to industrialization. Though there are many methods to remove the heavy metal from soil and water pollution but biosorption is one of the best scientific methods to remove heavy metal from water sample by using biomolecules and bacteria. Biosorbent have the ability to bind the heavy metal and therefore can remove from polluted water. Currently, we have taken the water sample from Ballendur Lake, Bangalore, which is highly polluted due to industries besides this lake. This sample of water was serially diluted to 10-7. 10-4 and 10-5 diluted sample was allowed to stand in Tryptone Glucose Extract agar media mixed with the different concentrations of lead acetate for 24 hours. Microflora growth was observed. Then we cultured in different temperature, pH and different age of culture media. Finally, we did the biochemical test to identify the bacteria isolate and we found till genus level, it could be either Streptococcus sp. or Enterococcus sp.", "title": "" }, { "docid": "0f963159aaafc36c8751ae615d0054ef", "text": "Nowadays, a vast amount of spatio-temporal data are being generated by devices like cell phones, GPS and remote sensing devices and therefore discovering interesting patterns in such data became an interesting topic for researchers. One of these topics has been spatio-temporal clustering which is a novel sub field of data mining and recent researches in this area has focused on new methods and ways which are adapting previous methods and solutions to the new problem. In this paper we first define what the spatio-temporal data is and what different it has with other types of data. Then try to classify the clustering methods and done works in this area based on the proposed solutions. 
classification has been made based on this fact that how these works import and adapt temporal concept in their solutions.", "title": "" }, { "docid": "c7f1e26d27c87bfa0da637c28dbcdeda", "text": "There has recently been an increased interest in named entity recognition and disambiguation systems at major conferences such as WWW, SIGIR, ACL, KDD, etc. However, most work has focused on algorithms and evaluations, leaving little space for implementation details. In this paper, we discuss some implementation and data processing challenges we encountered while developing a new multilingual version of DBpedia Spotlight that is faster, more accurate and easier to configure. We compare our solution to the previous system, considering time performance, space requirements and accuracy in the context of the Dutch and English languages. Additionally, we report results for 9 additional languages among the largest Wikipedias. Finally, we present challenges and experiences to foment the discussion with other developers interested in recognition and disambiguation of entities in natural language text.", "title": "" }, { "docid": "1fc6b2ffedfddb0dc476c3470c52fb13", "text": "Exponential growth in Electronic Healthcare Records (EHR) has resulted in new opportunities and urgent needs for discovery of meaningful data-driven representations and patterns of diseases in Computational Phenotyping research. Deep Learning models have shown superior performance for robust prediction in computational phenotyping tasks, but suffer from the issue of model interpretability which is crucial for clinicians involved in decision-making. In this paper, we introduce a novel knowledge-distillation approach called Interpretable Mimic Learning, to learn interpretable phenotype features for making robust prediction while mimicking the performance of deep learning models. Our framework uses Gradient Boosting Trees to learn interpretable features from deep learning models such as Stacked Denoising Autoencoder and Long Short-Term Memory. Exhaustive experiments on a real-world clinical time-series dataset show that our method obtains similar or better performance than the deep learning models, and it provides interpretable phenotypes for clinical decision making.", "title": "" }, { "docid": "9ee43cf00ce2be9ca4288ed0e4542b09", "text": "Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem, third, using this notion to combine information from multiple samplings at different levels; and fourth using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. 
To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.", "title": "" }, { "docid": "4455233571d9c4fca8cfa2a5eb8ef22f", "text": "This article summarizes the studies of the mechanism of electroacupuncture (EA) in the regulation of the abnormal function of hypothalamic-pituitary-ovarian axis (HPOA) in our laboratory. Clinical observation showed that EA with the effective acupoints could cure some anovulatory patients in a highly effective rate and the experimental results suggested that EA might regulate the dysfunction of HPOA in several ways, which means EA could influence some gene expression of brain, thereby, normalizing secretion of some hormones, such as GnRH, LH and E2. The effects of EA might possess a relative specificity on acupoints.", "title": "" }, { "docid": "500eca6c6fb88958662fd0210927d782", "text": "Purpose – Force output is extremely important for electromagnetic linear machines. The purpose of this study is to explore new permanent magnet (PM) array and winding patterns to increase the magnetic flux density and thus to improve the force output of electromagnetic tubular linear machines. Design/methodology/approach – Based on investigations on various PM patterns, a novel dual Halbach PM array is proposed in this paper to increase the radial component of flux density in three-dimensional machine space, which in turn can increase the force output of tubular linear machine significantly. The force outputs and force ripples for different winding patterns are formulated and analyzed, to select optimized structure parameters. Findings – The proposed dual Halbach array can increase the radial component of flux density and force output of tubular linear machines effectively. It also helps to decrease the axial component of flux density and thus to reduce the deformation and vibration of machines. By using analytical force models, the influence of winding patterns and structure parameters on the machine force output and force ripples can be analyzed. As a result, one set of optimized structure parameters are selected for the design of electromagnetic tubular linear machines. Originality/value – The proposed dual Halbach array and winding patterns are effective ways to improve the linear machine performance. It can also be implemented into rotary machines. The analyzing and design methods could be extended into the development of other electromagnetic machines.", "title": "" }, { "docid": "60acfeef728131aa627a5ab08b95259a", "text": "In this paper, we propose a discriminative aggregation network (DAN) method for video face recognition, which aims to integrate information from video frames effectively and efficiently. Unlike existing aggregation methods, our method aggregates raw video frames directly instead of the features obtained by complex processing. 
By combining the idea of metric learning and adversarial learning, we learn an aggregation network that produces more discriminative synthesized images compared to raw input frames. Our framework reduces the number of frames to be processed and significantly speed up the recognition procedure. Furthermore, low-quality frames containing misleading information are filtered and denoised during the aggregation process, which makes our system more robust and discriminative. Experimental results show that our method can generate discriminative images from video clips and improve the overall recognition performance in both the speed and accuracy on three widely used datasets.", "title": "" }, { "docid": "745451b3ca65f3388332232b370ea504", "text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.", "title": "" }, { "docid": "bb492930d57356bd84b2304cfdefa1fb", "text": "To convert wave energy into more suitable forms efficiently, a single-phase permanent magnet (PM) ac linear generator directly coupled to wave energy conversion is presented in this paper. Magnetic field performance of Halbach PM arrays is compared with that of radially magnetized structure. Then, the change of parameters in the geometry of slot and Halbach PM arrays' effect on the electromagnetic properties of the generator are investigated, and the optimization design guides are established for key design parameters. Finally, the simulation results are compared with test results of the prototype in wave energy conversion experimental system. Due to test and theory analysis results of prototype concordant with the finite-element analysis results, the proposed model and analysis method are correct and meet the requirements of direct-drive wave energy conversion system.", "title": "" }, { "docid": "95a3cc864c5f63b87df9c216856dbdb8", "text": "Web Content Management Systems (WCMS) play an increasingly important role in the Internet’s evolution. They are software platforms that facilitate the implementation of a web site or an e-commerce and are gaining popularity due to its flexibility and ease of use. In this work, we explain from a tutorial perspective how to manage WCMS and what can be achieved by using them. With this aim, we select the most popular open-source WCMS; namely, Joomla!, WordPress, and Drupal. Then, we implement three websites that are equal in terms of requirements, visual aspect, and functionality, one for each WCMS. Through a qualitative comparative analysis, we show the advantages and drawbacks of each solution, and the complexity associated. On the other hand, security concerns can arise if WCMS are not appropriately used. Due to the key position that they occupy in today’s Internet, we perform a basic security analysis of the three implement websites in the second part of this work. 
Specifically, we explain vulnerabilities, security enhancements, which errors should not be done, and which WCMS is initially safer.", "title": "" }, { "docid": "425d927136ad3fc0f967ea8e64d8f209", "text": "UNLABELLED\nThere is a clear need for brief, but sensitive and specific, cognitive screening instruments as evidenced by the popularity of the Addenbrooke's Cognitive Examination (ACE).\n\n\nOBJECTIVES\nWe aimed to validate an improved revision (the ACE-R) which incorporates five sub-domain scores (orientation/attention, memory, verbal fluency, language and visuo-spatial).\n\n\nMETHODS\nStandard tests for evaluating dementia screening tests were applied. A total of 241 subjects participated in this study (Alzheimer's disease=67, frontotemporal dementia=55, dementia of Lewy Bodies=20; mild cognitive impairment-MCI=36; controls=63).\n\n\nRESULTS\nReliability of the ACE-R was very good (alpha coefficient=0.8). Correlation with the Clinical Dementia Scale was significant (r=-0.321, p<0.001). Two cut-offs were defined (88: sensitivity=0.94, specificity=0.89; 82: sensitivity=0.84, specificity=1.0). Likelihood ratios of dementia were generated for scores between 88 and 82: at a cut-off of 82 the likelihood of dementia is 100:1. A comparison of individual age and education matched groups of MCI, AD and controls placed the MCI group performance between controls and AD and revealed MCI patients to be impaired in areas other than memory (attention/orientation, verbal fluency and language).\n\n\nCONCLUSIONS\nThe ACE-R accomplishes standards of a valid dementia screening test, sensitive to early cognitive dysfunction.", "title": "" } ]
scidocsrr
1bab69f63c90b806d21315cb8249ac13
Morphological Operations and Projection Profiles based Segmentation of Handwritten Kannada Document
[ { "docid": "a17241732ee8e9a8bc34caea2f08545d", "text": "Text line segmentation is an essential pre-processing stage for off-line handwriting recognition in many Optical Character Recognition (OCR) systems. It is an important step because inaccurately segmented text lines will cause errors in the recognition stage. Text line segmentation of the handwritten documents is still one of the most complicated problems in developing a reliable OCR. The nature of handwriting makes the process of text line segmentation very challenging. Several techniques to segment handwriting text line have been proposed in the past. This paper seeks to provide a comprehensive review of the methods of off-line handwriting text line segmentation proposed by researchers.", "title": "" } ]
[ { "docid": "23fc59a5a53906429a9e5d9cfb54bdc4", "text": "The greater palatine canal is an important anatomical structure that is often utilized as a pathway for infiltration of local anesthesia to affect sensation and hemostasis. Increased awareness of the length and anatomic variation in the anatomy of this structure is important when performing surgical procedures in this area (e.g., placement of osseointegrated dental implants). We examined the anatomy of the greater palatine canal using data obtained from CBCT scans of 500 subjects. Both right and left canals were viewed (N = 1000) in coronal and sagittal planes, and their paths and lengths determined. The average length of the greater palatine canal was 29 mm (±3 mm), with a range from 22 to 40 mm. Coronally, the most common anatomic pattern consisted of the canal traveling inferior-laterally for a distance then directly inferior for the remainder (43.3%). In the sagittal view, the canal traveled most frequently at an anterior-inferior angle (92.9%).", "title": "" }, { "docid": "1594afac3fe296478bd2a0c5a6ca0bb4", "text": "Executive Summary The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Realized risk levels and risk forecasts from the Barra Europe Equity Model (EUE2L) are both currently at the highest level for the last two decades. According to portfolio theory, institutional investors can gain significant risk-reduction and return-enhancement benefits from venturing out of their domestic markets. These effects from international diversification are due to imperfect correlations among markets. In this paper, we explore the historical diversification effects of an international allocation for UK investors. We illustrate that investing only in the UK market can be considered an active deviation from a global benchmark. Although a domestic allocation to UK large-cap stocks has significant international exposure when revenue sources are taken into account, as an active deviation from a global benchmark a UK domestic strategy has high concentration, leading to high asset-specific risk, and significant style and industry tilts. We show that an international allocation resulted in higher returns and lower risk for a UK investor in the last one, three, five, and ten years. In GBP terms, the MSCI All Country World Investable Market Index (ACWI IMI) — a global index that could be viewed as a proxy for a global portfolio — achieved higher return and lower risk compared to the MSCI UK Index during these periods. A developed market minimum-variance portfolio, represented by the MSCI World Minimum Volatility Index, 1 The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Figure 1 illustrates that the historical standard deviation of the MSCI UK Index is now near the highest level in recent history. The risk forecast for the index, obtained using the Barra Europe Equity Model, typically showed still better risk and return performance during these periods. The decreases in risk represented by allocations to MSCI ACWI IMI and the MSCI World Minimum Volatility Index were robust based on four different measures of portfolio risk. We also consider a stepwise approach to international diversification, sequentially adding small cap and international assets to a large cap UK portfolio. 
We show that this approach also reduced risk during the observed period, but we did not find evidence that it was more efficient for risk reduction than a passive allocation to MSCI ACWI IMI.", "title": "" }, { "docid": "05ce4be5b7d3c33ba1ebce575aca4fb9", "text": "In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. This paper explores the application of data mining techniques in predicting the likely churners and attribute selection on identifying the churn. It also compares the efficiency of several classifiers and lists their performances for two real telecom datasets.", "title": "" }, { "docid": "cddc653dc48a094897aa287f95c0d21d", "text": "We present a real-time approach for image-based localization within large scenes that have been reconstructed offline using structure from motion (Sfm). From monocular video, our method continuously computes a precise 6-DOF camera pose, by efficiently tracking natural features and matching them to 3D points in the Sfm point cloud. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method lies in invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a window of successive frames. This enables the algorithm to run in real-time, without fluctuations in the latency over long durations. We evaluate the method in large indoor and outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power, mobile computer suitable for onboard computation on a quadrotor micro aerial vehicle.", "title": "" }, { "docid": "699c6dbdd58642ec700246a52bc0ce66", "text": "The findings, interpretations and conclusions expressed in this report are those of the authors and do not necessarily imply the expression of any opinion whatsoever on the part of the Management or the Executive Directors of the African Development Bank, nor the Governments they represent, nor of the other institutions mentioned in this study. In the preparation of this report, every effort has been made to provide the most up to date, correct and clearly expressed information as possible; however, the authors do not guarantee accuracy of the data. Rights and Permissions All rights reserved. Reproduction, citation and dissemination of material contained in this information product for educational and non-commercial purposes are authorized without any prior written permission from the publisher, if the source is fully acknowledged. Reproduction of material in this information product for resale or other commercial purposes is prohibited. Since 2000, Africa has been experiencing a remarkable economic growth accompanied by improving democratic environment. 
Real GDP growth has risen by more than twice its pace in the last decade. Telecommunications, financial services and banking, construction and private-investment inflows have also increased substantially. However, most of the benefits of the high growth rates achieved over the last few years have not reached the rural poor. For this to happen, substantial growth in the agriculture sector will need to be stimulated and sustained, as the sector is key to inclusive growth, given its proven record of contributing to more robust reduction of poverty. This is particularly important when juxtaposed with the fact that the majority of Africa's poor are engaged in agriculture, a sector which supports the livelihoods of 90 percent of Africa's population. The sector also provides employment for about 60 percent of the economically active population, and 70 percent of the continent's poorest communities. In spite of agriculture being an acknowledged leading growth driver for Africa, the potential of the sector's contribution to growth and development has been underexploited mainly due to a variety of challenges, including the widening technology divide, weak infrastructure and declining technical capacity. These challenges have been exacerbated by weak input and output marketing systems and services, slow progress in regional integration, land access and rights issues, limited access to affordable credit, challenging governance issues in some countries, conflicts, effects of climate change, and the scourge of HIV/AIDS and other diseases. Green growth is critical to Africa because of the fragility of the …", "title": "" }, { "docid": "0513ce3971cb0e438598ea6766be19ff", "text": "This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and the closed-loop control suppress the cross-tier interference less than a fixed threshold and an adaptive threshold based on the noise and interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate the uplink throughput degradation of the macrocell BS due to the cross-tier interference and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost of macrocell throughput.", "title": "" }, { "docid": "36434d9a36dceb2b3c838f9d8d3ba56f", "text": "Long-duration missions challenge ground robot systems with respect to energy storage and efficient conversion to power on demand. Ground robot systems can contain multiple power sources such as fuel cell, battery and/or ultra capacitor. This paper presents a hybrid systems framework for collectively modeling the dynamics and switching between these different power components. The hybrid system allows modeling power source on/off switching and different regimes of operation, together with continuous parameters such as state of charge, temperature, and power output. We apply this modeling framework to a fuel cell/battery power system applicable to unmanned ground vehicles such as Packbot or TALON. A simulation comparison of different control strategies is presented. 
These strategies are compared based on maximizing energy efficiency and meeting thermal constraints.", "title": "" }, { "docid": "cb2edc1728a31b3c37ebf636be81f01f", "text": "Optimization problems in the power industry have attracted researchers from engineering, operations research and mathematics for many years. The complex nature of generation, transmission, and distribution of electric power implies ample opportunity of improvement towards the optimal. Mathematical models have proven indispensable in deepening the understanding of these optimization problems. The progress in algorithms and implementations has an essential share in widening the abilities to solve these optimization problems on hardware that is permanently improving. In the present paper we address unit commitment in power operation planning. This problem concerns the scheduling of start-up/shut-down decisions and operation levels for power generation units such that the fuel costs over some time horizon are minimal. The diversity of power systems regarding technological design and economic environment leads to a variety of issues potentially occurring in mathematical models of unit commitment. The ongoing liberalization of electricity markets will add to this by shifting the objective in power planning from fuel cost minimization to revenue maximization. For an introduction into basic aspects of unit commitment the reader is referred to the book by Wood and Wollenberg [35]. A literature synopsis on various traditional methodological approaches has been compiled by Sheble and Fahd [29]. In our paper, we present some of the more recent issues in modeling and algorithms for unit commitment. The present paper grew out of a collaboration with the German utility VEAG Vereinigte Energiewerke AG Berlin whose generation system comprises conventional coal and gas fired thermal units as well as pumped-storage plants. An important", "title": "" }, { "docid": "6a2a7b5831f6b3608eb88f5ccda6d520", "text": "In this paper we examine currently used programming contest systems. We discuss possible reasons why we do not expect any of the currently existing contest systems to be adopted by a major group of different programming contests. We suggest to approach the design of a contest system as a design of a secure IT system, using known methods from the area of computer", "title": "" }, { "docid": "e60d699411055bf31316d468226b7914", "text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. 
An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1", "title": "" }, { "docid": "f93e72b45a185e06d03d15791d312021", "text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482 . Registered on 5 August 2016.", "title": "" }, { "docid": "bdcb688bc914307d811114b2749e47c2", "text": "E-government initiatives are in their infancy in many developing countries. The success of these initiatives is dependent on government support as well as citizens' adoption of e-government services. This study adopted the unified of acceptance and use of technology (UTAUT) model to explore factors that determine the adoption of e-government services in a developing country, namely Kuwait. 880 students were surveyed, using an amended version of the UTAUT model. The empirical data reveal that performance expectancy, effort expectancy and peer influence determine students' behavioural intention. Moreover, facilitating conditions and behavioural intentions determine students' use of e-government services. Implications for decision makers and suggestions for further research are also considered in this study.", "title": "" }, { "docid": "9cc23cd9bfb3e422e2b4ace1fe816855", "text": "Evaluating surgeon skill has predominantly been a subjective task. Development of objective methods for surgical skill assessment are of increased interest. 
Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods applied to discriminate expert and novice surgeons. We test our method on real surgical data for suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers. .", "title": "" }, { "docid": "27a583d33644887ad126e8e4844dd2e3", "text": "In this work, we will explore different approaches used in Cross-Lingual Information Retrieval (CLIR) systems. Mainly, CLIR systems which use statistical machine translation (SMT) systems to translate queries into collection language. This will include using SMT systems as a black box or as a white box, also the SMT systems that are tuned towards better CLIR performance. After that, we will present our approach to rerank the alternative translations using machine learning regression model. This includes also introducing our set of features which we used to train the model. After that, we adapt this reranker for new languages. We also present our query expansion approach using word-embeddings model that is trained on medical data. Finally we reinvestigate translating the document collection into query language, then we present our future work.", "title": "" }, { "docid": "456a246b468feb443e0ed576173d6d46", "text": "Automatic person re-identification (re-id) across camera boundaries is a challenging problem. Approaches have to be robust against many factors which influence the visual appearance of a person but are not relevant to the person's identity. Examples for such factors are pose, camera angles, and lighting conditions. Person attributes are a semantic high level information which is invariant across many such influences and contain information which is often highly relevant to a person's identity. In this work we develop a re-id approach which leverages the information contained in automatically detected attributes. We train an attribute classifier on separate data and include its responses into the training process of our person re-id model which is based on convolutional neural networks (CNNs). This allows us to learn a person representation which contains information complementary to that contained within the attributes. Our approach is able to identify attributes which perform most reliably for re-id and focus on them accordingly. 
We demonstrate the performance improvement gained through use of the attribute information on multiple large-scale datasets and report insights into which attributes are most relevant for person re-id.", "title": "" }, { "docid": "d550f61bb64295cc0cc0389e2dc2ad01", "text": "Routing in Delay Tolerant Networks (DTN) with unpredictable node mobility is a challenging problem because disconnections are prevalent and lack of knowledge about network dynamics hinders good decision making. Current approaches are primarily based on redundant transmissions. They have either high overhead due to excessive transmissions or long delays due to the possibility of making wrong choices when forwarding a few redundant copies. In this paper, we propose a novel forwarding algorithm based on the idea of erasure codes. Erasure coding allows use of a large number of relays while maintaining a constant overhead, which results in fewer cases of long delays.We use simulation to compare the routing performance of using erasure codes in DTN with four other categories of forwarding algorithms proposed in the literature. Our simulations are based on a real-world mobility trace collected in a large outdoor wild-life environment. The results show that the erasure-coding based algorithm provides the best worst-case delay performance with a fixed amount of overhead. We also present a simple analytical model to capture the delay characteristics of erasure-coding based forwarding, which provides insights on the potential of our approach.", "title": "" }, { "docid": "6586fc02e554e58ee1d5a58ef90cc197", "text": "OBJECTIVES\nRecent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict driver's intended turning direction before reaching road intersections.\n\n\nAPPROACH\nWe executed experiments in a car simulator (N = 22) and a real car (N = 8). While subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or not of an error-related potentials from EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests.\n\n\nRESULTS\nAn average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimating whether the driver's intention coincides with the advice provided by the driving assistant in a real car.\n\n\nSIGNIFICANCE\nThe study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study in real car decoding driver's error-related brain activity. 
Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.", "title": "" }, { "docid": "ac4c2f4820496f40e08e587b070d4ef5", "text": "We have developed an implantable fuel cell that generates power through glucose oxidation, producing 3.4 μW cm(-2) steady-state power and up to 180 μW cm(-2) peak power. The fuel cell is manufactured using a novel approach, employing semiconductor fabrication techniques, and is therefore well suited for manufacture together with integrated circuits on a single silicon wafer. Thus, it can help enable implantable microelectronic systems with long-lifetime power sources that harvest energy from their surrounds. The fuel reactions are mediated by robust, solid state catalysts. Glucose is oxidized at the nanostructured surface of an activated platinum anode. Oxygen is reduced to water at the surface of a self-assembled network of single-walled carbon nanotubes, embedded in a Nafion film that forms the cathode and is exposed to the biological environment. The catalytic electrodes are separated by a Nafion membrane. The availability of fuel cell reactants, oxygen and glucose, only as a mixture in the physiologic environment, has traditionally posed a design challenge: Net current production requires oxidation and reduction to occur separately and selectively at the anode and cathode, respectively, to prevent electrochemical short circuits. Our fuel cell is configured in a half-open geometry that shields the anode while exposing the cathode, resulting in an oxygen gradient that strongly favors oxygen reduction at the cathode. Glucose reaches the shielded anode by diffusing through the nanotube mesh, which does not catalyze glucose oxidation, and the Nafion layers, which are permeable to small neutral and cationic species. We demonstrate computationally that the natural recirculation of cerebrospinal fluid around the human brain theoretically permits glucose energy harvesting at a rate on the order of at least 1 mW with no adverse physiologic effects. Low-power brain-machine interfaces can thus potentially benefit from having their implanted units powered or recharged by glucose fuel cells.", "title": "" }, { "docid": "4331057bb0a3f3add576513fa71791a8", "text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.", "title": "" } ]
scidocsrr
52fcac10e3a340aab6653031c2dae94d
Compliant leg behaviour explains basic dynamics of walking and running.
[ { "docid": "5d1e77b6b09ebac609f2e518b316bd49", "text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.", "title": "" } ]
[ { "docid": "d4400c07fe072a841c8f8e910c0e17f0", "text": "In the field of big data applications, lossless data compression and decompression can play an important role in improving the data center's efficiency in storage and distribution of data. To avoid becoming a performance bottleneck, they must be accelerated to have a capability of high speed data processing. As FPGAs begin to be deployed as compute accelerators in the data centers for its advantages of massive parallel customized processing capability, power efficiency and hardware reconfiguration. It is promising and interesting to use FPGAs for acceleration of data compression and decompression. The conventional development of FPGA accelerators using hardware description language costs much more design efforts than that of CPUs or GPUs. High level synthesis (HLS) can be used to greatly improve the design productivity. In this paper, we present a solution for accelerating lossless data decompression on FPGA by using HLS. With a pipelined data-flow structure, the proposed decompression accelerator can perform static Huffman decoding and LZ77 decompression at a very high throughput rate. According to the experimental results conducted on FPGA with the Calgary Corpus data benchmark, the average data throughput of the proposed decompression core achieves to 4.6 Gbps while running at 200 MHz.", "title": "" }, { "docid": "6f9ffe5e1633046418ca0bc4f7089b2f", "text": "This paper presents a new motion planning primitive to be used for the iterative steering of vision-based autonomous vehicles. This primitive is a parameterized quintic spline, denoted as -spline, that allows interpolating an arbitrary sequence of points with overall second-order geometric ( -) continuity. Issues such as completeness, minimality, regularity, symmetry, and flexibility of these -splines are addressed in the exposition. The development of the new primitive is tightly connected to the inversion control of nonholonomic car-like vehicles. The paper also exposes a supervisory strategy for iterative steering that integrates feedback vision data processing with the feedforward inversion control.", "title": "" }, { "docid": "84569374aa1adb152aee714d053b082d", "text": "PURPOSE\nTo describe the insertions of the superficial medial collateral ligament (sMCL) and posterior oblique ligament (POL) and their related osseous landmarks.\n\n\nMETHODS\nInsertions of the sMCL and POL were identified and marked in 22 unpaired human cadaveric knees. The surface area, location, positional relations, and morphology of the sMCL and POL insertions and related osseous structures were analyzed on 3-dimensional images.\n\n\nRESULTS\nThe femoral insertion of the POL was located 18.3 mm distal to the apex of the adductor tubercle (AT). The femoral insertion of the sMCL was located 21.1 mm distal to the AT and 9.2 mm anterior to the POL. The angle between the femoral axis and femoral insertion of the sMCL was 18.6°, and that between the femoral axis and the POL insertion was 5.1°. The anterior portions of the distal fibers of the POL were attached to the fascia cruris and semimembranosus tendon, whereas the posterior fibers were attached to the posteromedial side of the tibia directly. The tibial insertion of the POL was located just proximal and medial to the superior edge of the semimembranosus groove. The tibial insertion of the sMCL was attached firmly and widely to the tibial crest. 
The mean linear distances between the tibial insertion of the POL or sMCL and joint line were 5.8 and 49.6 mm, respectively.\n\n\nCONCLUSIONS\nThis study used 3-dimensional images to assess the insertions of the sMCL and POL and their related osseous landmarks. The AT was identified clearly as an osseous landmark of the femoral insertions of the sMCL and POL. The tibial crest and semimembranosus groove served as osseous landmarks of the tibial insertions of the sMCL and POL.\n\n\nCLINICAL RELEVANCE\nBy showing further details of the anatomy of the knee, the described findings can assist surgeons in anatomic reconstruction of the sMCL and POL.", "title": "" }, { "docid": "9c01496a3f3c52705671553165aa2024", "text": "Fiberoptic bronchoscopy is a widely performed procedure that is generally considered to be safe. The first performed bronchoscopy was done by Gustav Killian in 1897; however, the development of flexible fiberoptic bronchoscopy was accomplished by Ikeda in 1964(1). Flexible fiberoptic bronchoscopy is a key diagnostic and therapeutic procedure(2). It is estimated that more than 500,000 of these procedures are performed each year by pulmonologists, otolaryngologists, anesthesiologists, and cardiothoracic and trauma surgeons(3). Despite the widespread practice of diagnostic flexible bronchoscopy, there are no firm guidelines that assure a uniform acquisition of basic skills and competency in this procedure, nor are there guidelines to ensure uniform training and competency in advanced diagnostic flexible bronchoscopic techniques(4). The purpose of this review is to provide an update on 1) tracheobronchial anatomy, 2) flexible fiberoptic bronchoscopy exam, 3) training and competence on fiberoptic bronchoscopy, and 4) application of flexible fiberoptic bronchoscopy in thoracic anesthesia.", "title": "" }, { "docid": "631dc14ab0df1e5def0998bcf6ad016e", "text": "This study investigates the performance of two open source intrusion detection systems (IDSs) namely Snort and Suricata for accurately detecting the malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers and the performance was evaluated at 10 Gbps network speed. It was noted that Suricata could process a higher speed of network traffic than Snort with lower packet drop rate but it consumed higher computational resources. Snort had higher detection accuracy and was thus selected for further experiments. It was observed that Snort triggered a high rate of false positive alarms. To solve this problem a Snort adaptive plug-in was developed. To select the best performing algorithm for the Snort adaptive plug-in, an empirical study was carried out with different learning algorithms and Support Vector Machine (SVM) was selected. A hybrid version of SVM and Fuzzy logic produced a better detection accuracy. But the best result was achieved using an optimized SVM with the firefly algorithm with FPR (false positive rate) as 8.6% and FNR (false negative rate) as 2.2%, which is a good result. The novelty of this work is the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimized machine learning algorithms to Snort.", "title": "" }, { "docid": "d29ad30492b084cbcd2e6ede4665f483", "text": "K-means algorithm has been widely used in machine learning and data mining due to its simplicity and good performance. 
However, the standard k-means algorithm would be quite slow for clustering millions of data into thousands of or even tens of thousands of clusters. In this paper, we propose a fast k-means algorithm named multi-stage k-means (MKM) which uses a multi-stage filtering approach. The multi-stage filtering approach greatly accelerates the k-means algorithm via a coarse-to-fine search strategy. To further speed up the algorithm, hashing is introduced to accelerate the assignment step which is the most time-consuming part in k-means. Extensive experiments on several massive datasets show that the proposed algorithm can obtain up to 600X speed-up over the k-means algorithm with comparable accuracy.", "title": "" }, { "docid": "264c63f249f13bf3eb4fd5faac8f4fa0", "text": "This paper presents the study to investigate the possibility of the stand-alone micro hydro for low-cost electricity production which can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility study in term of the technical and economical performances of the micro hydro system are determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator will be designed for micro hydro under sustainable development to optimize between cost and efficiency by using the local materials and basic engineering knowledge. First of all, the simple simulation of micro hydro model for lighting system is developed by considering the optimal size of AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply the 9 W compact fluorescent up to 20 set for 8 hours by using pressure of water with 6 meters and 0.141 m3/min of flow rate. Lastly, a proposed micro hydro power plant can supply lighting system for rural electrification up to 525.6 kWh/year or 1,839.60 Baht/year and reduce 0.33 ton/year of CO2 emission.", "title": "" }, { "docid": "150a09dbdbc53282a23a2e99e4509255", "text": "The reductionist approach has revolutionized biology in the past 50 years. Yet its limits are being felt as the complexity of cellular interactions is gradually revealed by high-throughput technology. In order to make sense of the deluge of \"omic data\", a hypothesis-driven view is needed to understand how biomolecular interactions shape cellular networks. We review recent efforts aimed at building in vitro biochemical networks that reproduce the flow of genetic regulation. We highlight how those efforts have culminated in the rational construction of biochemical oscillators and bistable memories in test tubes. We also recapitulate the lessons learned about in vivo biochemical circuits such as the importance of delays and competition, the links between topology and kinetics, as well as the intriguing resemblance between cellular reaction networks and ecosystems.", "title": "" }, { "docid": "747ca83d8a4be084a30bbba3e96f248c", "text": "Introduction to chapter. Due to its cryptographic and operational key features such as the one-way function property, high speed and a fixed output size independent of input size the hash algorithm is one of the most important cryptographic primitives. A critical drawback of most cryptographic algorithms is the large computational overheads. This is getting more critical since the data amount to process or communicate is dramatically increasing. In many of such cases, a proper use of the hash algorithm effectively reduces the computational overhead. 
Digital signature algorithm and the message authentication are the most common applications of the hash algorithms. The increasing data size also motivates hardware designers to have a throughput optimal architecture of a given hash algorithm. In this chapter, some popular hash algorithms and their cryptanalysis are briefly introduced, and a design methodology for throughput optimal architectures of MD4-based hash algorithms is described in detail.", "title": "" }, { "docid": "d159ddace8c8d33963a304e04484aeff", "text": "This work addresses the problem of semantic scene understanding under fog. Although marked progress has been made in semantic scene understanding, it is mainly concentrated on clear-weather scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both labeled synthetic foggy data and unlabeled real foggy data. The method is based on the fact that the results of semantic segmentation in moderately adverse conditions (light fog) can be bootstrapped to solve the same problem in highly adverse conditions (dense fog). CMAda is extensible to other adverse conditions and provides a new paradigm for learning with synthetic data and unlabeled real data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) a novel fog densification method to densify the fog in real foggy scenes without known depth; and 4) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 40 images under dense fog. Our experiments show that 1) our fog simulation and fog density estimator outperform their state-of-theart counterparts with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly, benefiting both from our synthetic and real foggy data. The datasets and code are available at the project website. D. Dai · C. Sakaridis · S. Hecker · L. Van Gool ETH Zürich, Zurich, Switzerland L. Van Gool KU Leuven, Leuven, Belgium", "title": "" }, { "docid": "3b05004828d71f1b69d80cb25e165d7f", "text": "Mapping in the GPS-denied environment is an important and challenging task in the field of robotics. In the large environment, mapping can be significantly accelerated by multiple robots exploring different parts of the environment. Accordingly, a key problem is how to integrate these local maps built by different robots into a single global map. In this paper, we propose an approach for simultaneous merging of multiple grid maps by the robust motion averaging. The main idea of this approach is to recover all global motions for map merging from a set of relative motions. Therefore, it firstly adopts the pair-wise map merging method to estimate relative motions for grid map pairs. To obtain as many reliable relative motions as possible, a graph-based sampling scheme is utilized to efficiently remove unreliable relative motions obtained from the pair-wise map merging. Subsequently, the accurate global motions can be recovered from the set of reliable relative motions by the motion averaging. 
Experimental results carried on real robot data sets demonstrate that proposed approach can achieve simultaneous merging of multiple grid maps with good performances.", "title": "" }, { "docid": "18a483a6f8ce4f20a6e5209ca6dd4808", "text": "OBJECTIVE\nCurrent mainstream EEG electrode setups permit efficient recordings, but are often bulky and uncomfortable for subjects. Here we introduce a novel type of EEG electrode, which is designed for an optimal wearing comfort. The electrode is referred to as C-electrode where \"C\" stands for comfort.\n\n\nMETHODS\nThe C-electrode does not require any holder/cap for fixation on the head nor does it use traditional pads/lining of disposable electrodes - thus, it does not disturb subjects. Fixation of the C-electrode on the scalp is based entirely on the adhesive interaction between the very light C-electrode/wire construction (<35 mg) and a droplet of EEG paste/gel. Moreover, because of its miniaturization, both C-electrode (diameter 2-3mm) and a wire (diameter approximately 50 microm) are minimally (or not at all) visible to an external observer. EEG recordings with standard and C-electrodes were performed during rest condition, self-paced movements and median nerve stimulation.\n\n\nRESULTS\nThe quality of EEG recordings for all three types of experimental conditions was similar for standard and C-electrodes, i.e., for near-DC recordings (Bereitschaftspotential), standard rest EEG spectra (1-45 Hz) and very fast oscillations approximately 600 Hz (somatosensory evoked potentials). The tests showed also that once being placed on a subject's head, C-electrodes can be used for 9h without any loss in EEG recording quality. Furthermore, we showed that C-electrodes can be effectively utilized for Brain-Computer Interfacing. C-electrodes proved to posses a high stability of mechanical fixation (stayed attached with 2.5 g accelerations). Subjects also reported not having any tactile sensations associated with wearing of C-electrodes.\n\n\nCONCLUSION\nC-electrodes provide optimal wearing comfort without any loss in the quality of EEG recordings.\n\n\nSIGNIFICANCE\nWe anticipate that C-electrodes can be used in a wide range of clinical, research and emerging neuro-technological environments.", "title": "" }, { "docid": "c9b4366d56a889b5f25c92fe45898c08", "text": "Studies of the clinical correlates of the subtypes of Attention-Deficit/Hyperactivity Disorder (ADHD) have identified differences in the representation of age, gender, prevalence, comorbidity, and treatment. We report retrospective chart review data detailing the clinical characteristics of the Inattentive (IA) and Combined (C) subtypes of ADHD in 143 cases of ADHD-IA and 133 cases of ADHD-C. The children with ADHD-IA were older, more likely to be female, and had more comorbid internalizing disorders and learning disabilities. Individuals in the ADHD-IA group were two to five times as likely to have a referral for speech and language problems. The children with ADHD-IA were rated as having less overall functional impairment, but did have difficulty with academic achievement. Children with ADHD-IA were less likely to be treated with stimulants. One eighth of the children with ADHD-IA still had significant symptoms of hyperactivity/impulsivity, but did not meet the DSM-IV threshold for diagnosis of ADHD-Combined Type. 
The ADHD-IA subtype includes children with no hyperactivity and children who still manifest clinically significant hyperactive symptomatology but do not meet DSM-IV criteria for Combined Type. ADHD-IA children are often seen as having speech and language problems, and are less likely to receive medication treatment, but respond to medical treatment with improvement both in attention and residual hyperactive/impulsive symptoms.", "title": "" }, { "docid": "9507febd41296b63e8a6434eb27400f9", "text": "This paper presents a new approach for automatic concept extraction, using grammatical parsers and Latent Semantic Analysis. The methodology is described, also the tool used to build the benchmarkingcorpus. The results obtained on student essays shows good inter-rater agreement and promising machine extraction performance. Concept extraction is the first step to automatically extract concept maps fromstudent’s essays or Concept Map Mining.", "title": "" }, { "docid": "8bd9a5cf3ca49ad8dd38750410a462b0", "text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. Block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.", "title": "" }, { "docid": "66423bc00bb724d1d0c616397d898dd0", "text": "Background\nThere is a growing trend for patients to seek the least invasive treatments with less risk of complications and downtime for facial rejuvenation. Thread embedding acupuncture has become popular as a minimally invasive treatment. However, there is little clinical evidence in the literature regarding its effects.\n\n\nMethods\nThis single-arm, prospective, open-label study recruited participants who were women aged 40-59 years, with Glogau photoaging scale III-IV. Fourteen participants received thread embedding acupuncture one time and were measured before and after 1 week from the procedure. The primary outcome was a jowl to subnasale vertical distance. The secondary outcomes were facial wrinkle distances, global esthetic improvement scale, Alexiades-Armenakas laxity scale, and patient-oriented self-assessment scale.\n\n\nResults\nFourteen participants underwent thread embedding acupuncture alone, and 12 participants revisited for follow-up outcome measures. For the primary outcome measure, both jowls were elevated in vertical height by 1.87 mm (left) and 1.43 mm (right). Distances of both melolabial and nasolabial folds showed significant improvement. In the Alexiades-Armenakas laxity scale, each evaluator evaluated for four and nine participants by 0.5 grades improved. In the global aesthetic improvement scale, improvement was graded as 1 and 2 in nine and five cases, respectively. The most common adverse events were mild bruising, swelling, and pain. However, adverse events occurred, although mostly minor and of short duration.\n\n\nConclusion\nIn this study, thread embedding acupuncture showed clinical potential for facial wrinkles and laxity. 
However, further large-scale trials with a controlled design and objective measurements are needed.", "title": "" }, { "docid": "eb3a993e5302a45c11daa8d3482468c7", "text": "Network structure determination is an important issue in pattern classification based on a probabilistic neural network. In this study, a supervised network structure determination algorithm is proposed. The proposed algorithm consists of two parts and runs in an iterative way. The first part identifies an appropriate smoothing parameter using a genetic algorithm, while the second part determines suitable pattern layer neurons using a forward regression orthogonal algorithm. The proposed algorithm is capable of offering a fairly small network structure with satisfactory classification accuracy.", "title": "" }, { "docid": "2ce6a8dfe133da8a4486e2aca3487a03", "text": "This paper responds to research into the aerodynamics of flapping wings and to the problem of the lack of an adequate method which accommodates large-scale trailing vortices. A comparative review is provided of prevailing aerodynamic methods, highlighting their respective limitations as well as strengths. The main advantages of an unsteady aerodynamic panel method are then introduced and illustrated by modelling the flapping wings of a tethered sphingid moth and comparing the results with those generated using a quasi-steady method. The improved correlations of the aerodynamic forces and the resultant graphics clearly demonstrate the advantages of the unsteady panel method (namely, its ability to detail the trailing wake and to include dynamic effects in a distributed manner).", "title": "" }, { "docid": "2bee8125c2a8a1c85ab7f044e28e2191", "text": "To achieve instantaneous control of induction motor torque using field-orientation techniques, it is necessary that the phase currents be controlled to maintain precise instantaneous relationships. Failure to do so results in a noticeable degradation in torque response. Most of the currently used approaches to achieve this control employ classical control strategies which are only correct for steady-state conditions. A modern control theory approach which circumvents these limitations is developed. The approach uses a state-variable feedback control model of the field-oriented induction machine. This state-variable controller is shown to be intrinsically more robust than PI regulators. Experimental verification of the performance of this state-variable control strategy in achieving current-loop performance and torque control at high operating speeds is included.", "title": "" }, { "docid": "ea87229e46fd049930c75a9d5187fd6c", "text": "Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. 
Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.", "title": "" } ]
scidocsrr
a38f6d41736bd645d3707d56a7729e47
Automatic breast ultrasound image segmentation: A survey
[ { "docid": "4c945988031de61c4d57a586be542bec", "text": "In this paper we describe a novel algorithm for interactive multilabel segmentation of N-dimensional images. Given a small number of user-labelled pixels, the rest of the image is segmented automatically by a Cellular Automaton. The process is iterative, as the automaton labels the image, user can observe the segmentation evolution and guide the algorithm with human input where the segmentation is difficult to compute. In the areas, where the segmentation is reliably computed automatically no additional user effort is required. Results of segmenting generic photos and medical images are presented. Our experiments show that modest user effort is required for segmentation of moderately hard images.", "title": "" } ]
[ { "docid": "68a3f9fb186289f343b34716b2e087f6", "text": "User interface (UI) is one of the most important components of a mobile app and strongly influences users' perception of the app. However, UI design tasks are typically manual and time-consuming. This paper proposes a novel approach to (semi)-automate those tasks. Our key idea is to develop and deploy advanced deep learning models based on recurrent neural networks (RNN) and generative adversarial networks (GAN) to learn UI design patterns from millions of currently available mobile apps. Once trained, those models can be used to search for UI design samples given user-provided descriptions written in natural language and generate professional-looking UI designs from simpler, less elegant design drafts.", "title": "" }, { "docid": "938afbc53340a3aa6e454d17789bf021", "text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.", "title": "" }, { "docid": "7704b6baee77726a546b49bc0376d8cf", "text": "The increase in high-precision, high-sample-rate telemetry timeseries poses a problem for existing timeseries databases which can neither cope with the throughput demands of these streams nor provide the necessary primitives for effective analysis of them. We present a novel abstraction for telemetry timeseries data and a data structure for providing this abstraction: a timepartitioning version-annotated copy-on-write tree. An implementation in Go is shown to outperform existing solutions, demonstrating a throughput of 53 million inserted values per second and 119 million queried values per second on a four-node cluster. The system achieves a 2.9x compression ratio and satisfies statistical queries spanning a year of data in under 200ms, as demonstrated on a year-long production deployment storing 2.1 trillion data points. 
The principles and design of this database are generally applicable to a large variety of timeseries types and represent a significant advance in the development of technology for the Internet of Things.", "title": "" }, { "docid": "2a1bee8632e983ca683cd5a9abc63343", "text": "Phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. This paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large Web site. Phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. The interface is simple, robust and easy to use.\nTo convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the Web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. Our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach.", "title": "" }, { "docid": "809046f2f291ce610938de209d98a6f2", "text": "Pregnancy loss before 20 weeks’ gestation without outside intervention is termed spontaneous abortion and may be encountered in as many as 20% of clinically diagnosed pregnancies.1 It is said to be complete when all products of conception are expelled, the uterus is in a contracted state, and the cervix is closed. On the other hand, retention of part of products of conception inside the uterus, cervix, or vagina results in incomplete abortion. Although incomplete spontaneous miscarriages are commonly encountered in early pregnancy,2 traumatic fetal decapitation has not been mentioned in the medical literature as a known complication of spontaneous abortion. We report an extremely rare and unusual case of traumatic fetal decapitation due to self-delivery during spontaneous abortion in a 26-year-old woman who presented at 15 weeks’ gestation with gradually worsening vaginal bleeding and lower abdominal pain and with the fetal head still lying in the uterine cavity. During our search for similar cases, we came across just 1 other case report describing traumatic fetal decapitation after spontaneous abortion,3 although there are reports of fetal decapitation from amniotic band syndrome, vacuum-assisted deliveries, and destructive operations.4–8 A 26-year-old woman, gravida 2, para 0, presented to the emergency department with vaginal bleeding and cramping pain in her lower abdomen, both of which had gradually increased in severity over the previous 2 days. Her pulse and blood pressure were 86 beats per minute and 100/66 mm Hg, respectively, and her respiratory rate was 26 breaths per minute. She had a high-grade fever; her temperature was 103°F (39.4°C), recorded orally. There was suprapubic tenderness on palpation. About 8 or 9 days before presentation, she had severe pain in the lower abdomen, followed by vaginal bleeding. She gave a history of passing brown to black clots, one of which was particularly large, and she had to pull it out herself as if it was stuck. It resembled “an incomplete very small baby” in her own words. Although not sure, she could not make out the head of the “baby,” although she could appreciate the limbs and trunk. 
Thereafter, the bleeding gradually decreased over the next 2 days, but her lower abdominal pain persisted. However, after 1 day, she again started bleeding, and her pain increased in intensity. Meanwhile she also developed fever. She gave a history of recent cocaine use and alcohol drinking occasionally. No history of smoking was present. According to her last menstrual period, the gestational age was at 15 weeks, and during this pregnancy, she never had a sonographic examination. She reported taking a urine test for pregnancy at home 4 weeks before, which showed positive results. She gave a history of being pregnant 11⁄2 years before. At that time, also, she aborted spontaneously at 9 weeks’ gestation. No complications were seen at that time. She resumed her menses normally after about 2 months and was regular until 3 months back. The patient was referred for emergency sonography, which revealed that the fetal head was lying in the uterine cavity (Figure 1, A and B) along with the presence of fluid/ hemorrhage in the cervix and upper vagina (Figure 1C). No other definite fetal part could be identified. The placenta was also seen in the uterine cavity, and it was upper anterior and fundic (Figure 1D). No free fluid in abdomen was seen. Subsequently after stabilization, the patient underwent dilation and evacuation and had an uneventful postoperative course. As mentioned earlier, traumatic fetal decapitation accompanying spontaneous abortion is a very rare occurrence; we came across only 1 other case3 describing similar findings. Patients presenting to the emergency department with features suggestive of abortion, whether threatened, incomplete, or complete, should be thoroughly evaluated by both pelvic and sonographic examinations to check for any retained products of conception with frequent followups in case of threatened or incomplete abortions.", "title": "" }, { "docid": "4e97003a5609901f1f18be1ccbf9db46", "text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.", "title": "" }, { "docid": "e035233d3787ea79c446d1716553d41e", "text": "In this paper, we propose a method of detecting and classifying web application attacks. In contrast to current signature-based security methods, our solution is an ontology based technique. 
It specifies web application attacks by using semantic rules, the context of consequence and the specifications of application protocols. The system is capable of detecting sophisticated attacks effectively and efficiently by analyzing the specified portion of a user request where attacks are possible. Semantic rules help to capture the context of the application, possible attacks and the protocol that was used. These rules also allow inference to run over the ontological models in order to detect, the often complex polymorphic variations of web application attacks. The ontological model was developed using Description Logic that was based on the Web Ontology Language (OWL). The inference rules are Horn Logic statements and are implemented using the Apache JENA framework. The system is therefore platform and technology independent. Prior to the evaluation of the system the knowledge model was validated by using OntoClean to remove inconsistency, incompleteness and redundancy in the specification of ontological concepts. The experimental results show that the detection capability and performance of our system is significantly better than existing state of the art solutions. The system successfully detects web application attacks whilst generating few false positives. The examples that are presented demonstrate that a semantic approach can be used to effectively detect zero day and more sophisticated attacks in a real-world environment. 2013 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "f6dbce178e428522c80743e735920875", "text": "With the recent advancement in deep learning, we have witnessed a great progress in single image super-resolution. However, due to the significant information loss of the image downscaling process, it has become extremely challenging to further advance the state-of-theart, especially for large upscaling factors. This paper explores a new research direction in super resolution, called reference-conditioned superresolution, in which a reference image containing desired high-resolution texture details is provided besides the low-resolution image. We focus on transferring the high-resolution texture from reference images to the super-resolution process without the constraint of content similarity between reference and target images, which is a key difference from previous example-based methods. Inspired by recent work on image stylization, we address the problem via neural texture transfer. We design an end-to-end trainable deep model which generates detail enriched results by adaptively fusing the content from the low-resolution image with the texture patterns from the reference image. We create a benchmark dataset for the general research of reference-based super-resolution, which contains reference images paired with low-resolution inputs with varying degrees of similarity. Both objective and subjective evaluations demonstrate the great potential of using reference images as well as the superiority of our results over other state-of-the-art methods.", "title": "" }, { "docid": "850f5af0ac8fe1e2eb318acf00a14a55", "text": "VOS is a new mapping technique that can serve as an alternative to the wellknown technique of multidimensional scaling. We present an extensive comparison between the use of multidimensional scaling and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. 
In our experimental analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on multidimensional scaling, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a data set than maps constructed using well-known multidimensional scaling approaches.", "title": "" }, { "docid": "095f150e3b4551443720f42466789073", "text": "OBJECTIVE\nTo describe a new sign of cleft lip and palate (CLP), the maxillary gap, which is visible in the mid-sagittal plane of the fetal face used routinely for measurement of nuchal translucency thickness.\n\n\nMETHODS\nThis was a retrospective study of stored images of the mid-sagittal view of the fetal face at 11-13 weeks' gestation in 86 cases of CLP and 86 normal controls. The images were examined to determine if a maxillary gap was present, in which case its size was measured.\n\n\nRESULTS\nIn 37 (43.0%) cases of CLP the defect was isolated and in 49 (57.0%) there were additional fetal defects. In the isolated CLP group, the diagnosis of facial cleft was made in the first trimester in nine (24.3%) cases and in the second trimester in 28 (75.7%). In the group with additional defects, the diagnosis of facial cleft was made in the first trimester in 46 (93.9%) cases and in the second trimester in three (6.1%). A maxillary gap was observed in 96% of cases of CLP with additional defects, in 65% of those with isolated CLP and in 7% of normal fetuses. There was a large gap (>1.5 mm) or complete absence of signals from the maxilla in the midline in 69% of cases of CLP with additional defects, in 35% of those with isolated CLP and in none of the normal controls.\n\n\nCONCLUSIONS\nThe maxillary gap is a new simple marker of possible CLP, which could increase the detection rate of CLP, especially in isolated cases.", "title": "" }, { "docid": "91fdd315f12d8192e0cdada412abfda4", "text": "The design of neural architectures for structured objects is typically guided by experimental insights rather than a formal process. In this work, we appeal to kernels over combinatorial structures, such as sequences and graphs, to derive appropriate neural operations. We introduce a class of deep recurrent neural operations and formally characterize their associated kernel spaces. Our recurrent modules compare the input to virtual reference objects (cf. filters in CNN) via the kernels. Similar to traditional neural operations, these reference objects are parameterized and directly optimized in end-to-end training. We empirically evaluate the proposed class of neural architectures on standard applications such as language modeling and molecular graph regression, achieving state-of-the-art or competitive results across these applications. We also draw connections to existing architectures such as LSTMs.", "title": "" }, { "docid": "26deedfae0fd167d35df79f28c75e09c", "text": "In content-based image retrieval, SIFT feature and the feature from deep convolutional neural network (CNN) have demonstrated promising performance. To fully explore both visual features in a unified framework for effective and efficient retrieval, we propose a collaborative index embedding method to implicitly integrate the index matrices of them. 
We formulate the index embedding as an optimization problem from the perspective of neighborhood sharing and solve it with an alternating index update scheme. After the iterative embedding, only the embedded CNN index is kept for on-line query, which demonstrates significant gain in retrieval accuracy, with very economical memory cost. Extensive experiments have been conducted on the public datasets with million-scale distractor images. The experimental results reveal that, compared with the recent state-of-the-art retrieval algorithms, our approach achieves competitive accuracy performance with less memory overhead and efficient query computation.", "title": "" }, { "docid": "e46b79180d2e7f1afdd0f144fef3f976", "text": "The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of named entities of diseases is rather tougher than those of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, the publicly available recognition systems of disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on conditional random fields model with a rule-based post-processing module. The other one is based on the bidirectional recurrent neural networks. Then the named entities recognized by each of the DNER model are fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V.Database URL: http://219.223.252.210:8080/SS/cdr.html.", "title": "" }, { "docid": "cf0d47466adec1adebeb14f89f0009cb", "text": "We developed a novel learning-based human detection system, which can detect people having different sizes and orientations, under a wide variety of backgrounds or even with crowds. To overcome the affects of geometric and rotational variations, the system automatically assigns the dominant orientations of each block-based feature encoding by using the rectangularand circulartype histograms of orientated gradients (HOG), which are insensitive to various lightings and noises at the outdoor environment. Moreover, this work demonstrated that Gaussian weight and tri-linear interpolation for HOG feature construction can increase detection performance. Particularly, a powerful feature selection algorithm, AdaBoost, is performed to automatically select a small set of discriminative HOG features with orientation information in order to achieve robust detection results. The overall computational time is further reduced significantly without any performance loss by using the cascade-ofrejecter structure, whose hyperplanes and weights of each stage are estimated by using the AdaBoost approach.", "title": "" }, { "docid": "afb0d6a917fd0c19aaaa045c145a60d3", "text": "This paper proposes a new approach to using machine learning to detect grasp poses on novel objects presented in clutter. 
The input to our algorithm is a point cloud and the geometric parameters of the robot hand. The output is a set of hand poses that are expected to be good grasps. There are two main contributions. First, we identify a set of necessary conditions on the geometry of a grasp that can be used to generate a set of grasp hypotheses. This helps focus grasp detection away from regions where no grasp can exist. Second, we show how geometric grasp conditions can be used to generate labeled datasets for the purpose of training the machine learning algorithm. This enables us to generate large amounts of training data and it grounds our training labels in grasp mechanics. Overall, our method achieves an average grasp success rate of 88% when grasping novel objects presented in isolation and an average success rate of 73% when grasping novel objects presented in dense clutter. This system is available as a ROS package at http://wiki.ros.org/agile_grasp.", "title": "" }, { "docid": "be6e3666eba5752a59605a86e5bd932f", "text": "Accurate knowledge on the absolute or true speed of a vehicle, if and when available, can be used to enhance advanced vehicle dynamics control systems such as anti-lock brake systems (ABS) and auto-traction systems (ATS) control schemes. Current conventional method uses wheel speed measurements to estimate the speed of the vehicle. As a result, indication of the vehicle speed becomes erroneous and, thus, unreliable when large slips occur between the wheels and terrain. This paper describes a fuzzy rule-based Kalman filtering technique which employs an additional accelerometer to complement the wheel-based speed sensor, and produce an accurate estimation of the true speed of a vehicle. We use the Kalman filters to deal with the noise and uncertainties in the speed and acceleration models, and fuzzy logic to tune the covariances and reset the initialization of the filter according to slip conditions detected and measurement-estimation condition. Experiments were conducted using an actual vehicle to verify the proposed strategy. Application of the fuzzy logic rule-based Kalman filter shows that accurate estimates of the absolute speed can be achieved even under significant braking skid and traction slip conditions.", "title": "" }, { "docid": "5dd9c07946288d8fced7802b00d811bd", "text": "In the period 1890 to 1895, Willem Einthoven greatly improved the quality of tracings that could be directly obtained with the capillary electrometer. He then introduced an ingenious correction for the poor frequency response of these instruments, using differential equations. This method allowed him to predict the correct form of the human electrocardiogram, as subsequently revealed by the new string galvanometer that he introduced in 1902. For Einthoven, who won the Nobel Prize for the development of the electrocardiogram in 1924, one of the most rewarding aspects of the high fidelity recording of the human electrocardiogram was its validation of his earlier theoretical predictions regarding the electrical activity of the heart.", "title": "" }, { "docid": "914e130236cccd4c661134051c0d9e0b", "text": "We investigate neural models’ ability to capture lexicosyntactic inferences: inferences triggered by the interaction of lexical and syntactic information. We take the task of event factuality prediction as a case study and build a factuality judgment dataset for all English clause-embedding verbs in various syntactic contexts. 
We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.", "title": "" }, { "docid": "16ff5b993508f962550b6de495c9d651", "text": "Finding similar procedures in stripped binaries has various use cases in the domains of cyber security and intellectual property. Previous works have attended this problem and came up with approaches that either trade throughput for accuracy or address a more relaxed problem.\n In this paper, we present a cross-compiler-and-architecture approach for detecting similarity between binary procedures, which achieves both high accuracy and peerless throughput. For this purpose, we employ machine learning alongside similarity by composition: we decompose the code into smaller comparable fragments, transform these fragments to vectors, and build machine learning-based predictors for detecting similarity between vectors that originate from similar procedures.\n We implement our approach in a tool called Zeek and evaluate it by searching similarities in open source projects that we crawl from the world-wide-web. Our results show that we perform 250X faster than state-of-the-art tools without harming accuracy.", "title": "" }, { "docid": "d4896aa12be18aea9a6639422ee12d92", "text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.", "title": "" } ]
scidocsrr
100cb9db89c6d73c190af415c731c5ef
Stratification, Imaging, and Management of Acute Massive and Submassive Pulmonary Embolism.
[ { "docid": "b32286014bb7105e62fba85a9aab9019", "text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.", "title": "" } ]
[ { "docid": "8093101949a96d27082712ce086bf11f", "text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.", "title": "" }, { "docid": "443df7fa37723021c2079fd524f199ab", "text": "OBJECTIVE\nCircumcision, performed for religious or medical reasons is the procedure of surgical excision of the skin covering the glans penis, preputium in a certain shape and dimension so as to expose the tip of the glans penis. Short- and long- term complication rates of up to 50% have been reported, varying due to the recording system of different countries in which the procedure has been accepted as a widely performed simple surgical procedure. In this study, treatment procedures in patients presented to our clinic with complications after circumcision are described and methods to decrease the rate of the complications are reviewed.\n\n\nMATERIAL AND METODS\nCases that presented to our clinic between 2010 and 2013 with early complications of circumcision were retrospectively reviewed. Cases with acceptedly major complications as excess skin excision, skin necrosis and total amputation of the glans were included in the study, while cases with minor complications such as bleeding, hematoma and infection were excluded from the study.\n\n\nRESULTS\nRepair with full- thickness skin grafts was performed in patients with excess skin excision. In cases with skin necrosis, following the debridement of the necrotic skin, primary repair or repair with full- thickness graft was performed in cases where full- thickness skin defects developed and other cases with partial skin loss were left to secondary healing. Repair with an inguinal flap was performed in the case with glans amputation.\n\n\nCONCLUSION\nCircumcisions performed by untrained individuals are to be blamed for the complications of circumcision reported in this country. The rate of complications increases during the \"circumcision feasts\" where multiple circumcisions were performed. This also predisposes to transmission of various diseases, primarily hepatitis B/C and AIDS. Circumcision is a surgical procedure that should be performed by specialists under appropriate sterile circumstances in which the rate of complications would be decreased. The child may be exposed to recurrent psychosocial and surgical trauma when it is performed by incompetent individuals.", "title": "" }, { "docid": "88163c30fdafafcec1b69eaa995e3a99", "text": "Managing privacy in the IoT presents a significant challenge. We make the case that information obtained by auditing the flows of data can assist in demonstrating that the systems handling personal data satisfy regulatory and user requirements. Thus, components handling personal data should be audited to demonstrate that their actions comply with all such policies and requirements. 
A valuable side-effect of this approach is that such an auditing process will highlight areas where technical enforcement has been incompletely or incorrectly specified. There is a clear role for technical assistance in aligning privacy policy enforcement mechanisms with data protection regulations. The first step necessary in producing technology to accomplish this alignment is to gather evidence of data flows. We describe our work producing, representing and querying audit data and discuss outstanding challenges.", "title": "" }, { "docid": "eced9f448727b7461e253f48d9cf8505", "text": "Near-range videos contain objects that are close to the camera. These videos often contain discontinuous depth variation (DDV), which is the main challenge to the existing video stabilization methods. Traditionally, 2D methods are robust to various camera motions (e.g., quick rotation and zooming) under scenes with continuous depth variation (CDV). However, in the presence of DDV, they often generate wobbled results due to the limited ability of their 2D motion models. Alternatively, 3D methods are more robust in handling near-range videos. We show that, by compensating rotational motions and ignoring translational motions, near-range videos can be successfully stabilized by 3D methods without sacrificing the stability too much. However, it is time-consuming to reconstruct the 3D structures for the entire video and sometimes even impossible due to rapid camera motions. In this paper, we combine the advantages of 2D and 3D methods, yielding a hybrid approach that is robust to various camera motions and can handle the near-range scenarios well. To this end, we automatically partition the input video into CDV and DDV segments. Then, the 2D and 3D approaches are adopted for CDV and DDV clips, respectively. Finally, these segments are stitched seamlessly via a constrained optimization. We validate our method on a large variety of consumer videos.", "title": "" }, { "docid": "902f4f012c6e0f86228bea2f35cc691c", "text": "Research on personality’s role in coping is inconclusive. Proactive coping ability is one’s tendency to expect and prepare for life’s challenges (Schwarzer & Taubert, 2002). This type of coping provides a refreshing conceptualization of coping that allows an examination of personality’s role in coping that transcends the current situational versus dispositional coping conundrum. Participants (N = 49) took the Proactive Coping Inventory (Greenglass, Schwarzer, & Taubert, 1999) and their results were correlated with all domains and facets of the Five-Factor Model (FFM; Costa & McCrae, 1995). Results showed strong correlations between a total score (which encompassed 6 proactive coping scales), and Extraversion, Agreeableness, Conscientiousness, and Neuroticism, as well as between several underlying domain facets. Results also showed strong correlations between specific proactive coping subscales and several domains and facets of the FFM. Implications for the influence of innate personality factors in one’s ability to cope are discussed. An individual’s methods of coping with adversity are important aspects of their overall adaptation. Although characteristic ways of coping likely reflect learned experiences and situational factors to some degree, it is also likely that innate dispositions contribute to specific coping styles and overall ability to cope. Thus, there may be systematic relationships between enduring personality traits and coping ability. 
To show the theoretical importance of such a relationship, an account of empirical data that highlights the fundamental role of personality will develop a rationale for the hypothesized influence of personality on overall adaptation, and reasons why personality is likely to affect coping ability. Personality Until recently, the field has lacked consensus regarding an overall, comprehensive theory of personality. The emergence of the Five-Factor Model (FFM) over the past 10 to 15 years has provided a valuable paradigm from which to gain deeper understanding of important adaptational characteristics. Though there is still some disparity with regard to the comprehensiveness and conversely the succinctness of the model, there is no other model as well supported by research than the FFM (McCrae & John, 1992). The FiveFactor Model (FFM) consists of five broad domains and 30 lower-order facets that surfaced over decades of research and factor analysis (see Cattell, 1943, for an in-depth review). Though debate ensues concerning the exact name of each domain (Loehlin, 1992), it is generally agreed that five is the true number of mutually exclusive domains. The five domain names used by Costa and McCrae (1995) will be described for our purposes: Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Neuroticism is best understood as “individual differences in the tendency to experience distress” (McCrae & John, 1992, p. 195). Further, Neuroticism is ways in which a person thinks about and deals with problems and experiences that arise due to their susceptibility to unpleasant experiences. The definition of Extraversion is historically not as parsimonious as that of Neuroticism, because Extraversion encompasses a broader theme. The tendency toward social interaction and positive affect (Watson & Clark, 1997) is usually evident in a person who is highly extraverted. The next domain, Openness to Experience, encompasses intellectual curiosity as well as other affinities that are not related to intellect; for example, this domain has shown to describe a person who appreciates aesthetic value and who has a creative lifestyle (McCrae & John, 1992). Agreeableness is a domain that has often been associated with morality and the ability to get along with others (McCrae & John, 1992). An agreeable person would tend to work well in a group setting, because agreeableness is often expressed as a person’s tendency toward pro-social behavior (Graziano & Eisenberg, 1997). The final domain is Conscientiousness. Conscientious persons are “governed by conscious” and “diligent and thorough” (McCrae & John, 1992, p. 197). Further, Conscientiousness is often used to describe one’s ability to be in command of their behavior; i.e., driven and goal oriented (Hogan & Ones, 1997). The FFM is robust in several respects. First, the model suggests that personality is related to temperament, and is not influenced by environmental factors (McCrae et al., 2000). Instead, the ways traits are expressed are affected by culture, developmental influences, and situational factors. For example, a person’s personality can produce several different response patterns depending on the environment. Therefore, personality can be considered an enduring and relatively stable trait. Second, research on the FFM shows that the five factors are legitimate in a crosscultural context (McCrae & Costa, 1987). 
McCrae and Costa showed that six different translations of their FFM-based personality test, the NEO-PI-R, supported the validity of the previously described five factors. Moreover, the same five factors were evident and dominant in many different cultures that utilize extremely diverse linguistic patterns (1987). In a more recent study (McCrae et al., 2000) that investigated “intrinsic maturation”, pan-cultural age-related changes in personality profiles were evidenced. The implication is that as people in diverse cultures age, uniform changes in their personality profiles are observed. The emergent pattern showed that levels of Neuroticism, Extraversion, and Openness to Experience decrease with age, and that levels of Agreeableness and Conscientiousness increase with age in many cultures (McCrae et al., 2000). Gender differences in personality also seem to be cross-cultural. Williams, Satterwhite, and Best (1999) used data from 25 countries that had previously been used in the identification of gender stereotypes. A re-analysis of these data in the context of the FFM showed that the cross-cultural gender stereotype for females was higher on Agreeableness than it was for males, and the cross-cultural gender stereotype for males 
was higher than females on the other four domains. Though these data do not represent actual male and female responses on a personality inventory, it is remarkable that gender stereotypes alone would relate so distinctly to the FFM. The FFM has amassed plenty of evidence that personality is pervasive, enduring, and basic. Though individuals experience circumstances that cultivate certain abstract characteristics and promote particular outcomes, these tendencies and outcomes are derivatives of a diathesis that is created by personality traits (Costa & McCrae, 1992). Thus, it is practical to use personality to predict adaptational characteristics, such as coping ability. Coping Folkman and Lazarus (1980) defined coping as “the cognitive and behavioral efforts made to master, tolerate, or reduce external and internal demands and conflicts among them” (p. 223). The cognitive aspect of coping ability pertains to how threatening or important to the well-being of a person a stressful event is considered to be. The behavioral aspect of coping ability refers to the actual strategies and techniques a person employs to either change the situation (problem-focused coping) or to deal with the distressful emotions that arose due to the situation (emotion-focused coping). Clearly, the concept of coping is multi-faceted. The ways in which people appraise situations vary, the ways in which situations influence the options a person has to contend with situations vary, and the person-centered characteristics that predispose a person to certain appraisals and responses at each stage of the coping situation vary. Accordingly, Lazarus and Folkman (1987) formulated a transactional theory of coping that considers both a person’s coping response and their cognitive appraisal of the situation. This theory suggests that the person-environment interaction is dynamic and largely unpredictable. Despite evidence for coping as a process and the impact of situational factors on coping, it is important to realize that exact strategies employed are highly variable from person to person (Folkman & Lazarus, 1985). In addition, Lazarus and Folkman (1987) suggest that person-centered characteristics are influential to coping at the most basic level. For example, they recognize that emotion-focused coping tends to be related to person-centered characteristics; for example, some people are not able to cognitively reduce their stress or anxiety, while others are. In addition, the concept of cognitive appraisal creates the possibility that some people will appraise events to be more threatening or more amenable than others. Moreover, different people employ diverse behavioral styles to cope with the same situation (Folkman & Lazarus, 1985). Since the emergence and prominence of the FFM, the focus in coping research has moved increasingly toward an attempt to understand the dispositional basis of coping. Studies that employ dispositional coping measures (see Carver, Scheier, & Weintraub, 1989, for one such scale) have examined the relationship of self-reported coping tendencies to the FFM. One study (Watson & Hubbard, 1996) found that Neuroticism relates to maladaptive coping styles, Conscientiousness relates to problem-focused, action-oriented coping styles, Extraversion relates to social-support seeking, and Agreeableness shows only a modest correlation to coping style. O’Brien and DeLongis (1996) observed similar results, but continued to assert that the best understanding of the role of personality in the coping process is one that takes situational and dispositional ", "title": "" }, { "docid": "b39afe542e7c1a05f18de205d9588e0c", "text": "Transmission of Web3D media over the Internet can be slow, especially when downloading huge 3D models through relatively limited bandwidth. Currently, 3D compression and progressive meshes are used to alleviate the problem, but these schemes do not consider similarity among the 3D components, leaving rooms for improvement in terms of efficiency. This paper proposes a similarity-aware 3D model reduction method, called Lightweight Progressive Meshes (LPM). The key idea of LPM is to search similar components in a 3D model, and reuse them through the construction of a Lightweight Scene Graph (LSG). The proposed LPM offers three significant benefits. First, the size of 3D models can be reduced for transmission without almost any precision loss of the original models. Second, when rendering, decompression is not needed to restore the original model, and instanced rendering can be fully exploited. Third, it is extremely efficient under very limited bandwidth, especially when transmitting large 3D scenes. Performance on real data justifies the effectiveness of our LPM, which improves the state-of-the-art in Web3D media transmission.", "title": "" }, { "docid": "644729aad373c249100181fa0b0775e8", "text": "Cloud broker is an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. In real life scenarios, automated cloud service brokering is often challenging because the service descriptions may involve complex constraints and require flexible semantic matching. Furthermore, cloud providers often use non-standard formats leading to semantic interoperability issues. In this paper, we formulate cloud service brokering under a service oriented framework, and propose a novel OWL-S based semantic cloud service discovery and selection system. 
The proposed system supports dynamic semantic matching of cloud services described with complex constraints. We consider a practical cloud service brokering scenario, and show with detailed illustration that our system is promising for real-life applications.", "title": "" }, { "docid": "1b0abb269fcfddc9dd00b3f8a682e873", "text": "Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications. Architectural innovations within F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we explore an alternate direction of recalibrating the feature maps adaptively, to boost meaningful features, while suppressing weak ones. We draw inspiration from the recently proposed squeeze & excitation (SE) module for channel recalibration of feature maps for image classification. Towards this end, we introduce three variants of SE modules for image segmentation, (i) squeezing spatially and exciting channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE) and (iii) concurrent spatial and channel squeeze & excitation (scSE). We effectively incorporate these SE modules within three different state-of-theart F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent improvement of performance across all architectures, while minimally effecting model complexity. Evaluations are performed on two challenging applications: whole brain segmentation on MRI scans and organ segmentation on whole body contrast enhanced CT scans.", "title": "" }, { "docid": "8ba94bf9142c924aaf131c5571a5a661", "text": "Worldwide, 30% – 40% of women and 13% of men suffer from osteoporotic fractures of the bone, particularly the older people. Doctors in the hospitals need to manually inspect a large number of x-ray images to identify the fracture cases. Automated detection of fractures in x-ray images can help to lower the workload of doctors by screening out the easy cases, leaving a small number of difficult cases and the second confirmation to the doctors to examine more closely. To our best knowledge, such a system does not exist as yet. This paper describes a method of measuring the neck-shaft angle of the femur, which is one of the main diagnostic rules that doctors use to determine whether a fracture is present at the femur. Experimental tests performed on test images confirm that the method is accurate in measuring neck-shaft angle and detecting certain types of femur fractures.", "title": "" }, { "docid": "903a5b7fb82d3d46b02e720b2db9c982", "text": "A heuristic recursive algorithm for the two-dimensional rectangular strip packing problem is presented. It is based on a recursive structure combined with branch-and-bound techniques. Several lengths are tried to determine the minimal plate length to hold all the items. Initially the plate is taken as a block. For the current block considered, the algorithm selects an item, puts it at the bottom-left corner of the block, and divides the unoccupied region into two smaller blocks with an orthogonal cut. The dividing cut is vertical if the block width is equal to the plate width; otherwise it is horizontal. Both lower and upper bounds are used to prune unpromising branches. The computational results on a class of benchmark problems indicate that the algorithm performs better than several recently published algorithms. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "6e47d81ddb9a1632d0ef162c92b0a454", "text": "Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoderdecoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models.", "title": "" }, { "docid": "d5238992b0433383023df48fd99fd656", "text": "We compute upper and lower bounds on the VC dimension and pseudodimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as W log W, where W is the number of parameters in the network. This result stands in opposition to the case where the number of layers is unbounded, in which case the VC dimension and pseudo-dimension grow as W2. We combine our results with recently established approximation error rates and determine error bounds for the problem of regression estimation by piecewise polynomial networks with unbounded weights.", "title": "" }, { "docid": "7749b46bc899b3d876d63d8f3d0981ea", "text": "This paper details the control and guidance architecture for the T-wing tail-sitter unmanned air vehicle, (UAV). The T-wing is a vertical take off and landing (VTOL) UAV that is capable of both wing-born horizontal flight and propeller born vertical mode flight including hover and descent. During low-speed vertical flight the T-wing uses propeller wash over its aerodynamic surfaces to effect control. At the lowest level, the vehicle uses a mixture of classical and LQR controllers for angular rate and translational velocity control. These low-level controllers are directed by a series of proportional guidance controllers for the vertical, horizontal and transition flight modes that allow the vehicle to achieve autonomous waypoint navigation. The control design for the T-wing is complicated by the large differences in vehicle dynamics between vertical and horizontal flight; the difficulty of accurately predicting the low-speed vehicle aerodynamics; and the basic instability of the vertical flight mode. This paper considers the control design problem for the T-wing in light of these factors. In particular it focuses on the integration of all the different types and levels of controllers into a full flight-vehicle control system.", "title": "" }, { "docid": "a0fcd09ea8f29a0827385ae9f48ddd44", "text": "Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. 
To enable analysis of these implicit networks, we develop a probabilistic model that combines mutuallyexciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.", "title": "" }, { "docid": "ec44e814277dd0d45a314c42ef417cbe", "text": "INTRODUCTION Oxygen support therapy should be given to the patients with acute hypoxic respiratory insufficiency in order to provide oxygenation of the tissues until the underlying pathology improves. The inspiratory flow rate requirement of patients with respiratory insufficiency varies between 30 and 120 L/min. Low flow and high flow conventional oxygen support systems produce a maximum flow rate of 15 L/min, and FiO2 changes depending on the patient’s peak inspiratory flow rate, respiratory pattern, the mask that is used, or the characteristics of the cannula. The inability to provide adequate airflow leads to discomfort in tachypneic patients. With high-flow nasal oxygen (HFNO) cannulas, warmed and humidified air matching the body temperature can be regulated at flow rates of 5–60 L/min, and oxygen delivery varies between 21% and 100%. When HFNO, first used in infants, was reported to increase the risk of infection, its long-term use was stopped. This problem was later eliminated with the use of sterile water, and its use has become a current issue in critical adult patients as well. Studies show that HFNO treatment improves physiological parameters when compared to conventional oxygen systems. Although there are studies indicating successful applications in different patient groups, there are also studies indicating that it does not create any difference in clinical parameters, but patient comfort is better in HFNO when compared with standard oxygen therapy and noninvasive mechanical ventilation (NIMV) (1-6). In this compilation, the physiological effect mechanisms of HFNO treatment and its use in various clinical situations are discussed in the light of current studies.", "title": "" }, { "docid": "e4c27a97a355543cf113a16bcd28ca50", "text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.", "title": "" }, { "docid": "708d024f7fccc00dd3961ecc9aca1893", "text": "Transportation networks play a crucial role in human mobility, the exchange of goods and the spread of invasive species. With 90 per cent of world trade carried by sea, the global network of merchant ships provides one of the most important modes of transportation. Here, we use information about the itineraries of 16 363 cargo ships during the year 2007 to construct a network of links between ports. We show that the network has several features that set it apart from other transportation networks. 
In particular, most ships can be classified into three categories: bulk dry carriers, container ships and oil tankers. These three categories do not only differ in the ships' physical characteristics, but also in their mobility patterns and networks. Container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports. The network of all ship movements possesses a heavy-tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types. The data analysed in this paper improve current assumptions based on gravity models of ship movements, an important step towards understanding patterns of global trade and bioinvasion.", "title": "" }, { "docid": "77bdd6c3f5065ef4abfaa70d34bc020a", "text": "The discovery of disease-causing mutations typically requires confirmation of the variant or gene in multiple unrelated individuals, and a large number of rare genetic diseases remain unsolved due to difficulty identifying second families. To enable the secure sharing of case records by clinicians and rare disease scientists, we have developed the PhenomeCentral portal (https://phenomecentral.org). Each record includes a phenotypic description and relevant genetic information (exome or candidate genes). PhenomeCentral identifies similar patients in the database based on semantic similarity between clinical features, automatically prioritized genes from whole-exome data, and candidate genes entered by the users, enabling both hypothesis-free and hypothesis-driven matchmaking. Users can then contact other submitters to follow up on promising matches. PhenomeCentral incorporates data for over 1,000 patients with rare genetic diseases, contributed by the FORGE and Care4Rare Canada projects, the US NIH Undiagnosed Diseases Program, the EU Neuromics and ANDDIrare projects, as well as numerous independent clinicians and scientists. Though the majority of these records have associated exome data, most lack a molecular diagnosis. PhenomeCentral has already been used to identify causative mutations for several patients, and its ability to find matching patients and diagnose these diseases will grow with each additional patient that is entered.", "title": "" }, { "docid": "9efa07624d538272a5da844c74b2f56d", "text": "Electronic health records (EHRs), digitization of patients’ health record, offer many advantages over traditional ways of keeping patients’ records, such as easing data management and facilitating quick access and real-time treatment. EHRs are a rich source of information for research (e.g. in data analytics), but there is a risk that the published data (or its leakage) can compromise patient privacy. The k-anonymity model is a widely used privacy model to study privacy breaches, but this model only studies privacy against identity disclosure. Other extensions to mitigate existing limitations in k-anonymity model include p-sensitive k-anonymity model, p+-sensitive k-anonymity model, and (p, α)-sensitive k-anonymity model. In this paper, we point out that these existing models are inadequate in preserving the privacy of end users. Specifically, we identify situations where p+sensitive k-anonymity model is unable to preserve the privacy of individuals when an adversary can identify similarities among the categories of sensitive values. We term such attack as Categorical Similarity Attack (CSA). 
Thus, we propose a balanced p+-sensitive k-anonymity model, as an extension of the p+-sensitive k-anonymity model. We then formally analyze the proposed model using High-Level Petri Nets (HLPN) and verify its properties using SMT-lib and Z3 solver.We then evaluate the utility of release data using standard metrics and show that our model outperforms its counterparts in terms of privacy vs. utility tradeoff. © 2017 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
a8d541d183f0204e030d10c8cfd1d0aa
Using pseudo-senses for improving the extraction of synonyms from word embeddings
[ { "docid": "57ab94ce902f4a8b0082cc4f42cd3b3f", "text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.", "title": "" }, { "docid": "3021a6be2aab29e18f1fe7e77c59a1d8", "text": "We demonstrate the advantage of specializing semantic word embeddings for either similarity or relatedness. We compare two variants of retrofitting and a joint-learning approach, and find that all three yield specialized semantic spaces that capture human intuitions regarding similarity and relatedness better than unspecialized spaces. We also show that using specialized spaces in NLP tasks and applications leads to clear improvements, for document classification and synonym selection, which rely on either similarity or relatedness but not both.", "title": "" } ]
[ { "docid": "d27f416f291f814d157d0413e9c06d29", "text": "Injury prediction is one of the most challenging issues in sports and a key component for injury prevention. Sports injuries aetiology investigations have assumed a reductionist view in which a phenomenon has been simplified into units and analysed as the sum of its basic parts and causality has been seen in a linear and unidirectional way. This reductionist approach relies on correlation and regression analyses and, despite the vast effort to predict sports injuries, it has been limited in its ability to successfully identify predictive factors. The majority of human health conditions are complex. In this sense, the multifactorial complex nature of sports injuries arises not from the linear interaction between isolated and predictive factors, but from the complex interaction among a web of determinants. Thus, the aim of this conceptual paper was to propose a complex system model for sports injuries and to demonstrate how the implementation of complex system thinking may allow us to better address the complex nature of the sports injuries aetiology. According to this model, we should identify features that are hallmarks of complex systems, such as the pattern of relationships (interactions) among determinants, the regularities (profiles) that simultaneously characterise and constrain the phenomenon and the emerging pattern that arises from the complex web of determinants. In sports practice, this emerging pattern may be related to injury occurrence or adaptation. This novel view of preventive intervention relies on the identification of regularities or risk profile, moving from risk factors to risk pattern recognition.", "title": "" }, { "docid": "e1675cec4200848b32e7d5ffefaf2846", "text": "Although information retrieval models based on Markov Random Fields (MRF), such as Sequential Dependence Model and Weighted Sequential Dependence Model (WSDM), have been shown to outperform bag-of-words probabilistic and language modeling retrieval models by taking into account term dependencies, it is not known how to effectively account for term dependencies in query expansion methods based on pseudo-relevance feedback (PRF) for retrieval models of this type. In this paper, we propose Semantic Weighted Dependence Model (SWDM), a PRF based query expansion method for WSDM, which utilizes distributed low-dimensional word representations (i.e., word embeddings). Our method finds the closest unigrams to each query term in the embedding space and top retrieved documents and directly incorporates them into the retrieval function of WSDM. Experiments on TREC datasets indicate statistically significant improvement of SWDM over state-of-the-art MRF retrieval models, PRF methods for MRF retrieval models and embedding based query expansion methods for bag-of-words retrieval models.", "title": "" }, { "docid": "c680940ecbeb9e74236c63e3de1429c7", "text": "Parallel Analysis is a Monte Carlo simulation technique that aids researchers in determining the number of factors to retain in Principal Component and Exploratory Factor Analysis. This method provides a superior alternative to other techniques that are commonly used for the same purpose, such as the Scree test or the Kaiser’s eigenvalue-greater-than-one rule. Nevertheless, Parallel Analysis is not well known among researchers, in part because it is not included as an analysis option in the most popular statistical packages. 
This paper describes and illustrates how to apply Parallel Analysis with an easy-to-use computer program called ViSta-PARAN. ViSta-PARAN is a user-friendly application that can compute and interpret Parallel Analysis. Its user interface is fully graphic and includes a dialog box to specify parameters, and specialized graphics to visualize the analysis output.", "title": "" }, { "docid": "c45a494afc622ec7ab5af78098945eeb", "text": "While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.", "title": "" }, { "docid": "21520ed93571c1f77b566e20ef73b7a3", "text": "BACKGROUND\nOsteoporosis is a common disease in elderly, characterized by poor bone quality as a result of alterations affecting trabecular bone. However, recent studies have described also an important role of alterations of cortical bone in the physiopathology of osteoporosis. Although dual-energy X-ray absorptiometry (DXA) is a valid method to assess bone mineral density, in the presence of comorbidities real bone fragility is unable to be evaluated. The number of hip fractures is rising, especially in people over 85 years old.\n\n\nAIMS\nThe aim is to evaluate an alternative method so that it can indicate fracture risk, independent of bone mineral density (BMD). Femoral cortical index (FCI) assesses cortical bone stock using femur X-ray.\n\n\nMETHODS\nA retrospective study has been conducted on 152 patients with hip fragility fractures. FCI has been calculated on fractured femur and on the opposite side. The presence of comorbidities, osteoporosis risk factors, vitamin D levels, and BMD have been analyzed for each patient.\n\n\nRESULTS\nAverage values of FCI have been 0.42 for fractured femurs and 0.48 at the opposite side with a statistically significant difference (p = 0.002). Patients with severe hypovitaminosis D had a minor FCI compared to those with moderate deficiency (0.41 vs. 0.46, p < 0.011). 42 patients (27.6%) with osteopenic or normal BMD have presented low values of FCI.\n\n\nDISCUSSION AND CONCLUSION\nA significant correlation among low values of FCI, comorbidities, severe hypovitaminosis D. and BMD in patients with hip fractures has been found. FCI could be a useful tool to evaluate bone fragility and to predict fracture risk even in the normal and osteopenic BMD patients.", "title": "" }, { "docid": "997a1ec16394a20b3a7f2889a583b09d", "text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. 
However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.", "title": "" }, { "docid": "b7b8b850659367695ca3d2eb3d0f710c", "text": "Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality) and changes in tone (acoustic modality) to convey our intentions. Humans easily process and understand faceto-face communication, however, comprehending this form of communication remains a significant challenge for Artificial Intelligence (AI). AI must understand each modality and the interactions between them that shape human communication. In this paper, we present a novel neural architecture for understanding human communication called the Multiattention Recurrent Network (MARN). The main strength of our model comes from discovering interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory of a recurrent component called the Long-short Term Hybrid Memory (LSTHM). We perform extensive comparisons on six publicly available datasets for multimodal sentiment analysis, speaker trait recognition and emotion recognition. MARN shows state-of-the-art performance on all the datasets.", "title": "" }, { "docid": "7ae8e78059bb710c46b66c120e1ff0ef", "text": "Biometrics technology is keep growing substantially in the last decades with great advances in biometric applications. An accurate personal authentication or identification has become a critical step in a wide range of applications such as national ID, electronic commerce, and automated and remote banking. The recent developments in the biometrics area have led to smaller, faster, and cheaper systems such as mobile device systems. As a kind of human biometrics for personal identification, fingerprint is the dominant trait due to its simplicity to be captured, processed, and extracted without violating user privacy. In a wide range of applications of fingerprint recognition, including civilian and forensics implementations, a large amount of fingerprints are collected and stored everyday for different purposes. In Automatic Fingerprint Identification System (AFIS) with a large database, the input image is matched with all fields inside the database to identify the most potential identity. Although satisfactory performances have been reported for fingerprint authentication (1:1 matching), both time efficiency and matching accuracy deteriorate seriously by simple extension of a 1:1 authentication procedure to a 1:N identification system (Manhua, 2010). The system response time is the key issue of any AFIS, and it is often improved by controlling the accuracy of the identification to satisfy the system requirement. In addition to developing new technologies, it is necessary to make clear the trade-off between the response time and the accuracy in fingerprint identification systems. Moreover, from the versatility and developing cost points of view, the trade-off should be realized in terms of system design, implementation, and usability. Fingerprint classification is one of the standard approaches to speed up the matching process between the input sample and the collected database (K. Jain et al., 2007). 
Fingerprint classification is considered as indispensable step toward reducing the search time through large fingerprint databases. It refers to the problem of assigning fingerprint to one of several pre-specified classes, and it presents an interesting problem in pattern recognition, especially in the real and time sensitive applications that require small response time. Fingerprint classification process works on narrowing down the search domain into smaller database subsets, and hence speeds up the total response time of any AFIS. Even for", "title": "" }, { "docid": "4f3936b753abd2265d867c0937aec24c", "text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.", "title": "" }, { "docid": "e35d304b73fc8e7b848a154f547c976d", "text": "While neural machine translation (NMT) provides high-quality translation, it is still hard to interpret and analyze its behavior. We present an interactive interface for visualizing and intervening behavior of NMT, specifically concentrating on the behavior of beam search mechanism and attention component. The tool (1) visualizes search tree and attention and (2) provides interface to adjust search tree and attention weight (manually or automatically) at real-time. We show the tool help users understand NMT in various ways.", "title": "" }, { "docid": "23527243a9ccb9feaa24ccc7ac38f05d", "text": "BACKGROUND\nElectrosurgical units are the most common type of electrical equipment in the operating room. A basic understanding of electricity is needed to safely apply electrosurgical technology for patient care.\n\n\nMETHODS\nWe reviewed the literature concerning the essential biophysics, the incidence of electrosurgical injuries, and the possible mechanisms for injury. Various safety guidelines pertaining to avoidance of injuries were also reviewed.\n\n\nRESULTS\nElectrothermal injury may result from direct application, insulation failure, direct coupling, capacitive coupling, and so forth.\n\n\nCONCLUSION\nA thorough knowledge of the fundamentals of electrosurgery by the entire team in the operating room is essential for patient safety and for recognizing potential complications. 
Newer hemostatic technologies can be used to decrease the incidence of complications.", "title": "" }, { "docid": "474572cef9f1beb875d3ae012e06160f", "text": "Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.", "title": "" }, { "docid": "69bb9ac73e4135dbfa00084a734adfa7", "text": "Mobility-assistive device such as powered wheelchair is very useful for disabled people, to gain some physical independence. The three main functions of the proposed system are, 1) wheelchair navigation using multiple input, 2) obstacle detection using IR sensors, 3) home automation for disable person. Wheelchair can be navigated through i)voice command or ii) moving head or hand in four fixed position which is captured using accelerometer sensor built in android phone. Using 4 IR sensors we can avoid the risk of collision and injury and can maintain some safer distance from the objects. Disable person cannot stand up and switch on-off the light or fan every time. So to give them more relaxation this system offers home automation by giving voice command to the android phone or by manually swipe the button on the screen. The system can be available at very low cost so that more number of disable persons can get benefits.", "title": "" }, { "docid": "7ea6a5d576e84e15d1da5c2256592fa5", "text": "Context An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available. Objective The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting. Method To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge. 
Results The resulting reference framework of situational factors consists of 8 classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition. Conclusion In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.", "title": "" }, { "docid": "82f1e278631c4ee6cd253842a7b9697a", "text": "The introduction of smart mobile devices has radically redesigned user interaction, as these devices are equipped with numerous sensors, making applications context-aware. To further improve user experience, most mobile operating systems and service providers are gradually shipping smart devices with voice controlled intelligent personal assistants, reaching a new level of human and technology convergence. While these systems facilitate user interaction, it has been recently shown that there is a potential risk regarding devices, which have such functionality. Our independent research indicates that this threat is not merely potential, but very real and more dangerous than initially perceived, as it is augmented by the inherent mechanisms of the underlying operating systems, the increasing capabilities of these assistants, and the proximity with other devices in the Internet of Things (IoT) era. In this paper, we discuss and demonstrate how these attacks can be launched, analysing their impact in real world scenarios.", "title": "" }, { "docid": "4d0bfb1eead0886e4196d61cf698aac5", "text": "We use machine learning for designing a medium frequency trading strategy for a portfolio of 5 year and 10 year US Treasury note futures. We formulate this as a classification problem where we predict the weekly direction of movement of the portfolio using features extracted from a deep belief network trained on technical indicators of the portfolio constituents. The experimentation shows that the resulting pipeline is effective in making a profitable trade.", "title": "" }, { "docid": "a76a21657810caa31c6a01d2fddef013", "text": "Intermittent scan chain hold-time fault is discussed in this paper and a method to diagnose the faulty site in a scan chain is proposed as well. Unlike the previous scan chain diagnosis methods that targeted permanent faults only, the proposed method targets both permanent faults and intermittent faults. Three ideas are presented in this paper. First an enhanced upper bound on the location of candidate faulty scan cells is obtained. Second a new method to determine a lower bound is proposed. Finally a statistical diagnosis algorithm is proposed to calculate the probabilities of the bounded set of candidate faulty scan cells. The proposed algorithm is shown to be efficient and effective for large industrial designs with multiple faulty scan chains.", "title": "" }, { "docid": "b25ea93053a18c17c7c53d4f08a0715c", "text": "The ministry of education is launching an overall project to implement the use of ICT in the Israeli education system. 
To prepare pre-service teachers with whom we work for this kind of implementation, we designed a model, which prepares them to use digital tools effectively while integrating particular pedagogy for teaching a specific mathematics or science content. The goal of the present research is to study the development of these pre-service teachers' TPACK (technological, pedagogical and content knowledge), attitudes toward computers and their ICT proficiency. For this purpose, we used questionnaires developed by the MOFET institute and by previous studies. The research results show significant improvement in the TPACK level and ICT proficiency, but no significant effect of the preparation on most of the components of the teachers' attitudes toward computers, being positively high before and after the preparation.", "title": "" }, { "docid": "c20c9c299bd4dd57c1fef5107e31a99a", "text": "The evolution of SAW oscillator technology over the past 17 years is described and a review of the current state of the art for high-performance SAW oscillators is presented. This review draws heavily upon the authors' own experience and efforts, which have focused upon the development of a wide variety of SAW oscillators in response to numerous high-performance military system requirements.", "title": "" }, { "docid": "e303eddacfdce272b8e71dc30a507020", "text": "As new media are becoming daily fare, Internet addiction appears as a potential problem in adolescents. From the reported negative consequences, it appears that Internet addiction can have a variety of detrimental outcomes for young people that may require professional intervention. Researchers have now identified a number of activities and personality traits associated with Internet addiction. This study aimed to synthesise previous findings by (i) assessing the prevalence of potential Internet addiction in a large sample of adolescents, and (ii) investigating the interactions between personality traits and the usage of particular Internet applications as risk factors for Internet addiction. A total of 3,105 adolescents in the Netherlands filled out a self-report questionnaire including the Compulsive Internet Use Scale and the Quick Big Five Scale. Results indicate that 3.7% of the sample were classified as potentially being addicted to the Internet. The use of online gaming and social applications (online social networking sites and Twitter) increased the risk for Internet addiction, whereas agreeableness and resourcefulness appeared as protective factors in high frequency online gamers. The findings support the inclusion of ‘Internet addiction’ in the DSM-V. Vulnerability and resilience appear as significant aspects that require consideration in", "title": "" } ]
scidocsrr
45dc63c7962642655573a02aa9d0857a
Detecting energy-greedy anomalies and mobile malware variants
[ { "docid": "db89d618c127dbf45cac1062ae5117ab", "text": "A language-independent means of gauging topical similarity in unrestricted text is described. The method combines information derived from n-grams (consecutive sequences of n characters) with a simple vector-space technique that makes sorting, categorization, and retrieval feasible in a large multilingual collection of documents. No prior information about document content or language is required. Context, as it applies to document similarity, can be accommodated by a well-defined procedure. When an existing document is used as an exemplar, the completeness and accuracy with which topically related documents are retrieved is comparable to that of the best existing systems. The results of a formal evaluation are discussed, and examples are given using documents in English and Japanese.", "title": "" } ]
[ { "docid": "53fca78f9ecbfe0a88eb1df8596976e1", "text": "As there has been an explosive increase in wireless data traffic, mmw communication has become one of the most attractive techniques in the 5G mobile communications systems. Although mmw communication systems have been successfully applied to indoor scenarios, various external factors in an outdoor environment limit the applications of mobile communication systems working at the mmw bands. In this article, we discuss the issues involved in the design of antenna array architecture for future 5G mmw systems, in which the antenna elements can be deployed in the shapes of a cross, circle, or hexagon, in addition to the conventional rectangle. The simulation results indicate that while there always exists a non-trivial gain fluctuation in other regular antenna arrays, the circular antenna array has a flat gain in the main lobe of the radiation pattern with varying angles. This makes the circular antenna array more robust to angle variations that frequently occur due to antenna vibration in an outdoor environment. In addition, in order to guarantee effective coverage of mmw communication systems, possible solutions such as distributed antenna systems and cooperative multi-hop relaying are discussed, together with the design of mmw antenna arrays. Furthermore, other challenges for the implementation of mmw cellular networks, for example, blockage, communication security, hardware development, and so on, are discussed, as are potential solutions.", "title": "" }, { "docid": "8dde3827552256660089847a547e3c80", "text": "Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for gradeschool science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and – in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results – outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query rewriting, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.", "title": "" }, { "docid": "4edcd69dbda61f234db6798f930947a5", "text": "Deep reinforcement learning has shown its success in game playing. However, 2.5D fighting games would be a challenging task to handle due to ambiguity in visual appearances like height or depth of the characters. Moreover, actions in such games typically involve particular sequential action orders, which also makes the network design very difficult. 
Based on the network of Asynchronous Advantage Actor-Critic (A3C), we create an OpenAI-gym-like gaming environment with the game of Little Fighter 2 (LF2), and present a novel A3C+ network for learning RL agents. The introduced model includes a Recurrent Info network, which utilizes game-related info features with recurrent layers to observe combo skills for fighting. In the experiments, we consider LF2 in different settings, which successfully demonstrates the use of our proposed model for learning 2.5D fighting games.", "title": "" }, { "docid": "47afccb5e7bcdade764666f3b5ab042e", "text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.", "title": "" }, { "docid": "1669a9cb7dabaa778fbb367bbba77232", "text": "Functional significance of delta oscillations is not fully understood. One way to approach this question would be from an evolutionary perspective. Delta oscillations dominate the EEG of waking reptiles. In humans, they are prominent only in early developmental stages and during slow-wave sleep. Increase of delta power has been documented in a wide array of developmental disorders and pathological conditions. Considerable evidence on the association between delta waves and autonomic and metabolic processes hints that they may be involved in integration of cerebral activity with homeostatic processes. Much evidence suggests the involvement of delta oscillations in motivation. They increase during hunger, sexual arousal, and in substance users. They also increase during panic attacks and sustained pain. In cognitive domain, they are implicated in attention, salience detection, and subliminal perception. This evidence shows that delta oscillations are associated with evolutionary old basic processes, which in waking adults are overshadowed by more advanced processes associated with higher frequency oscillations. The former processes rise in activity, however, when the latter are dysfunctional.", "title": "" }, { "docid": "fc18c8900e4b5e9264d78a414fef152a", "text": "Development of Web 2.0 has resulted in enormous increase in the vast source of opinionated user generated data. Sentiment Analysis includes extracting, grasping, arranging and presenting the feelings or suppositions communicated in the information gathered from the clients. This paper exhibits an efficient writing survey of different strategies of sentiment analysis. A model for sentiment analysis of twitter data using existing techniques is constructed for comparative analysis of various approaches. Dataset is pre-processed for noise removal and unigrams as well as bigrams are used for feature extraction with term frequency as weighting criteria. 
Maximum accuracy is achieved by using a combination of SVM and Naïve Bayes at 78.60% employing unigrams and 81.40% employing bigrams as features. Keywords— Sentiment Analysis, Crowdsourced data, Twitter, Machine Learning Techniques.", "title": "" }, { "docid": "fd8f5dc4264464cd8f978872d58aaf19", "text": "OBJECTIVES\nTo determine the capacity of black soldier fly larvae (BSFL) (Hermetia illucens) to convert fresh human faeces into larval biomass under different feeding regimes, and to determine how effective BSFL are as a means of human faecal waste management.\n\n\nMETHODS\nBlack soldier fly larvae were fed fresh human faeces. The frequency of feeding, number of larvae and feeding ratio were altered to determine their effects on larval growth, prepupal weight, waste reduction, bioconversion and feed conversion rate (FCR).\n\n\nRESULTS\nThe larvae that were fed a single lump amount of faeces developed into significantly larger larvae and prepupae than those fed incrementally every 2 days; however, the development into pre-pupae took longer. The highest waste reduction was found in the group containing the most larvae, with no difference between feeding regimes. At an estimated 90% pupation rate, the highest bioconversion (16-22%) and lowest, most efficient FCR (2.0-3.3) occurred in groups that contained 10 and 100 larvae, when fed both the lump amount and incremental regime.\n\n\nCONCLUSION\nThe prepupal weight, bioconversion and FCR results surpass those from previous studies into BSFL management of swine, chicken manure and municipal organic waste. This suggests that the use of BSFL could provide a solution to the health problems associated with poor sanitation and inadequate human waste management in developing countries.", "title": "" }, { "docid": "002f49b0aa994b286a106d6b75ec8b2a", "text": "We introduce a library of geometric voxel features for CAD surface recognition/retrieval tasks. Our features include local versions of the intrinsic volumes (the usual 3D volume, surface area, integrated mean and Gaussian curvature) and a few closely related quantities. We also compute Haar wavelet and statistical distribution features by aggregating raw voxel features. We apply our features to object classification on the ESB data set and demonstrate accurate results with a small number of shallow decision trees.", "title": "" }, { "docid": "fe35799be26543a90b4d834e41b492eb", "text": "Social Web stands for the culture of participation and collaboration on the Web. Structures emerge from social interactions: social tagging enables a community of users to assign freely chosen keywords to Web resources. The structure that evolves from social tagging is called folksonomy and recent research has shown that the exploitation of folksonomy structures is beneficial to information systems. In this thesis we propose models that better capture usage context of social tagging and develop two folksonomy systems that allow for the deduction of contextual information from tagging activities. We introduce a suite of ranking algorithms that exploit contextual information embedded in folksonomy structures and prove that these context-sensitive ranking algorithms significantly improve search in Social Web systems. We setup a framework of user modeling and personalization methods for the Social Web and evaluate this framework in the scope of personalized search and social recommender systems.
Extensive evaluation reveals that our context-based user modeling techniques have significant impact on the personalization quality and clearly improve regular user modeling approaches. Finally, we analyze the nature of user profiles distributed on the Social Web, implement a service that supports cross-system user modeling and investigate the impact of cross-system user modeling methods on personalization. In different experiments we prove that our cross-system user modeling strategies solve cold-start problems in social recommender systems and that intelligent re-use of external profile information improves the recommendation quality also beyond the cold-start.", "title": "" }, { "docid": "dbaadbff5d9530c3b33ae1231eeec217", "text": "A group of 1st-graders who were administered a battery of reading tasks in a previous study were followed up as 11th graders. Ten years later, they were administered measures of exposure to print, reading comprehension, vocabulary, and general knowledge. First-grade reading ability was a strong predictor of all of the 11th-grade outcomes and remained so even when measures of cognitive ability were partialed out. First-grade reading ability (as well as 3rd- and 5th-grade ability) was reliably linked to exposure to print, as assessed in the 11th grade, even after 11th-grade reading comprehension ability was partialed out, indicating that the rapid acquisition of reading ability might well help develop the lifetime habit of reading, irrespective of the ultimate level of reading comprehension ability that the individual attains. Finally, individual differences in exposure to print were found to predict differences in the growth in reading comprehension ability throughout the elementary grades and thereafter.", "title": "" }, { "docid": "0b4f44030a922ba2c970c263583e8465", "text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. 
The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.", "title": "" }, { "docid": "bb24e185c02dd096ba12654392181774", "text": "The authors examined 2 ways reward might increase creativity. First, reward contingent on creativity might increase extrinsic motivation. Studies 1 and 2 found that repeatedly giving preadolescent students reward for creative performance in 1 task increased their creativity in subsequent tasks. Study 3 reported that reward promised for creativity increased college students' creative task performance. Second, expected reward for high performance might increase creativity by enhancing perceived self-determination and, therefore, intrinsic task interest. Study 4 found that employees' intrinsic job interest mediated a positive relationship between expected reward for high performance and creative suggestions offered at work. Study 5 found that employees' perceived self-determination mediated a positive relationship between expected reward for high performance and the creativity of anonymous suggestions for helping the organization.", "title": "" }, { "docid": "5bfb666403fce6277f1e274fe3696cbf", "text": "We study node similarity in labeled networks, using the label sequences found in paths of bounded length q leading to the nodes. (This recalls the q-grams employed in document resemblance, based on the Jaccard distance.) When applied to networks, the challenge is two-fold: the number of q-grams generated from labeled paths grows exponentially with q, and their frequency should be taken into account: this leads to a variation of the Jaccard index known as Bray-Curtis index for multisets. We describe nSimGram, a suite of fast algorithms for node similarity with q-grams, based on a novel blend of color coding, probabilistic counting, sketches, and string algorithms, where the universe of elements to sample is exponential. We provide experimental evidence that our measure is effective and our running times scale to deal with large real-world networks.", "title": "" }, { "docid": "0b19bd9604fae55455799c39595c8016", "text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. 
In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.", "title": "" }, { "docid": "b75336a7470fe2b002e742dbb6bfa8d5", "text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.", "title": "" }, { "docid": "d23df7fd9a9a0e847604bbdbe8ce04e8", "text": "In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.", "title": "" }, { "docid": "99a9dd7ed22351a1b33528f878537da8", "text": "The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. 
Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.", "title": "" }, { "docid": "57b84ac6866e3e60aae874c4d00e5815", "text": "A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. This paper develops a theory to characterize the goal of relaxation labeling. The theory is founded on a definition of consistency in labelings, extending the notion of constraint satisfaction. In certain restricted circumstances, an explicit functional exists that can be maximized to guide the search for consistent labelings. This functional is used to derive a new relaxation labeling operator. When the restrictions are not satisfied, the theory relies on variational calculus. It is shown that the problem of finding consistent labelings is equivalent to solving a variational inequality. A procedure nearly identical to the relaxation operator derived under restricted circumstances serves in the more general setting. Further, a local convergence result is established for this operator. The standard relaxation labeling formulas are shown to approximate our new operator, which leads us to conjecture that successful applications of the standard methods are explainable by the theory developed here. Observations about convergence and generalizations to higher order compatibility relations are described.", "title": "" }, { "docid": "26b415f796b85dea5e63db9c58b6c790", "text": "A predominant portion of Internet services, like content delivery networks, news broadcasting, blogs sharing and social networks, etc., is data centric. A significant amount of new data is generated by these services each day. To efficiently store and maintain backups for such data is a challenging task for current data storage systems. Chunking based deduplication (dedup) methods are widely used to eliminate redundant data and hence reduce the required total storage space. In this paper, we propose a novel Frequency Based Chunking (FBC) algorithm.
Unlike the most popular Content-Defined Chunking (CDC) algorithm which divides the data stream randomly according to the content, FBC explicitly utilizes the chunk frequency information in the data stream to enhance the data deduplication gain especially when the metadata overhead is taken into consideration. The FBC algorithm consists of two components, a statistical chunk frequency estimation algorithm for identifying the globally appeared frequent chunks, and a two-stage chunking algorithm which uses these chunk frequencies to obtain a better chunking result. To evaluate the effectiveness of the proposed FBC algorithm, we conducted extensive experiments on heterogeneous datasets. In all experiments, the FBC algorithm persistently outperforms the CDC algorithm in terms of achieving a better dedup gain or producing much less number of chunks. Particularly, our experiments show that FBC produces 2.5 ~ 4 times less number of chunks than that of a baseline CDC which achieving the same Duplicate Elimination Ratio (DER). Another benefit of FBC over CDC is that the FBC with average chunk size greater than or equal to that of CDC achieves up to 50% higher DER than that of a CDC algorithm.", "title": "" } ]
scidocsrr
ca8916e9093b82a22f0eb62bf055f942
Understanding and Designing Complex Systems: Response to "A framework for optimal high-level descriptions in science and engineering - preliminary report"
[ { "docid": "0f9ef379901c686df08dd0d1bb187e22", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" } ]
[ { "docid": "58c488555240ded980033111a9657be4", "text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.", "title": "" }, { "docid": "e6ca00d92f6e54ec66943499fba77005", "text": "This paper covers aspects of governing information data on enterprise level using IBM solutions. In particular it focus on one of the key elements of governance — data lineage for EU GDPR regulations.", "title": "" }, { "docid": "e2ba4f88f4b1a8afcf51882bc7cfa634", "text": "The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents, which can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. 
However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.", "title": "" }, { "docid": "565a8ea886a586dc8894f314fa21484a", "text": "BACKGROUND\nThe Entity Linking (EL) task links entity mentions from an unstructured document to entities in a knowledge base. Although this problem is well-studied in news and social media, this problem has not received much attention in the life science domain. One outcome of tackling the EL problem in the life sciences domain is to enable scientists to build computational models of biological processes with more efficiency. However, simply applying a news-trained entity linker produces inadequate results.\n\n\nMETHODS\nSince existing supervised approaches require a large amount of manually-labeled training data, which is currently unavailable for the life science domain, we propose a novel unsupervised collective inference approach to link entities from unstructured full texts of biomedical literature to 300 ontologies. The approach leverages the rich semantic information and structures in ontologies for similarity computation and entity ranking.\n\n\nRESULTS\nWithout using any manual annotation, our approach significantly outperforms state-of-the-art supervised EL method (9% absolute gain in linking accuracy). Furthermore, the state-of-the-art supervised EL method requires 15,000 manually annotated entity mentions for training. These promising results establish a benchmark for the EL task in the life science domain. We also provide in depth analysis and discussion on both challenges and opportunities on automatic knowledge enrichment for scientific literature.\n\n\nCONCLUSIONS\nIn this paper, we propose a novel unsupervised collective inference approach to address the EL problem in a new domain. We show that our unsupervised approach is able to outperform a current state-of-the-art supervised approach that has been trained with a large amount of manually labeled data. Life science presents an underrepresented domain for applying EL techniques. By providing a small benchmark data set and identifying opportunities, we hope to stimulate discussions across natural language processing and bioinformatics and motivate others to develop techniques for this largely untapped domain.", "title": "" }, { "docid": "e82631018c9bc25098882cc8464a8d7b", "text": "This paper describes several existing data link layer protocols that provide real-time capabilities on wired networks, focusing on token-ring and Carrier Sense Multiple Access based networks. Existing modifications to provide better real-time capabilities and performance are also described. Finally the pros and cons regarding the At-Home Anywhere project are discussed.", "title": "" }, { "docid": "2fde207669557def4e22612d51f31afe", "text": "Using neural networks for learning motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn and their compactness once they are trained. Despite these advantages, these systems require large amounts of motion capture data for each new character or style of motion to be generated, and systems have to undergo lengthy retraining, and often reengineering, to get acceptable results. 
This can make the use of these systems impractical for animators and designers and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the amount of parameters required for learning from each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions with few clips of motion data and synthesizing smooth motions in real-time. CCS Concepts •Computing methodologies → Animation; Neural networks; Motion capture;", "title": "" }, { "docid": "05874da7b27475377dcd8f7afdd1bc5a", "text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler.Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator,When ever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.", "title": "" }, { "docid": "b8dcf30712528af93cb43c5960435464", "text": "The first clinical description of Parkinson's disease (PD) will embrace its two century anniversary in 2017. For the past 30 years, mitochondrial dysfunction has been hypothesized to play a central role in the pathobiology of this devastating neurodegenerative disease. The identifications of mutations in genes encoding PINK1 (PTEN-induced kinase 1) and Parkin (E3 ubiquitin ligase) in familial PD and their functional association with mitochondrial quality control provided further support to this hypothesis. Recent research focused mainly on their key involvement in the clearance of damaged mitochondria, a process known as mitophagy. It has become evident that there are many other aspects of this complex regulated, multifaceted pathway that provides neuroprotection. As such, numerous additional factors that impact PINK1/Parkin have already been identified including genes involved in other forms of PD. A great pathogenic overlap amongst different forms of familial, environmental and even sporadic disease is emerging that potentially converges at the level of mitochondrial quality control. Tremendous efforts now seek to further detail the roles and exploit PINK1 and Parkin, their upstream regulators and downstream signaling pathways for future translation. This review summarizes the latest findings on PINK1/Parkin-directed mitochondrial quality control, its integration and cross-talk with other disease factors and pathways as well as the implications for idiopathic PD. 
In addition, we highlight novel avenues for the development of biomarkers and disease-modifying therapies that are based on a detailed understanding of the PINK1/Parkin pathway.", "title": "" }, { "docid": "6ad201e411520ff64881b49915415788", "text": "What is the right supervisory signal to train visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents use physical interactions with the world to learn visual representations unlike current vision systems which just use passive observations (images and videos downloaded from web). For example, babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture allowing us to learn visual representations. We show the quality of learned representations by observing neuron activations and performing nearest neighbor retrieval on this learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3 %.", "title": "" }, { "docid": "df6f6e52f97cfe2d7ff54d16ed9e2e54", "text": "Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.", "title": "" }, { "docid": "7526ae3542d1e922bd73be0da7c1af72", "text": "Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. 
In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively (1) the novelty of the team as a whole, (2) the novelty of the agents’ individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.", "title": "" }, { "docid": "b5270bbcbe8ed4abf8ae5dabe02bb933", "text": "We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then, establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments done on the 3D RMA dataset show that the proposed algorithm performs as good as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.", "title": "" }, { "docid": "ca21a20152eef5081fa51e7f3a5c2d87", "text": "We review some of the most widely used patterns for the programming of microservices: circuit breaker, service discovery, and API gateway. By systematically analysing different deployment strategies for these patterns, we reach new insight especially for the application of circuit breakers. We also evaluate the applicability of Jolie, a language for the programming of microservices, for these patterns and report on other standard frameworks offering similar solutions. Finally, considerations for future developments are presented.", "title": "" }, { "docid": "b75e9077cc745b15fa70267c3b0eba45", "text": "This study explored the relation of shame proneness and guilt proneness to constructive versus destructive responses to anger among 302 children (Grades 4-6), adolescents (Grades 7-11), 176 college students, and 194 adults. Across all ages, shame proneness was clearly related to maladaptive response to anger, including malevolent intentions; direct, indirect, and displaced aggression; self-directed hostility; and negative long-term consequences. In contrast, guilt proneness was associated with constructive means of handling anger, including constructive intentions, corrective action and non-hostile discussion with the target of the anger, cognitive reappraisals of the target's role, and positive long-term consequences. Escapist-diffusing responses showed some interesting developmental trends. 
Among children, these dimensions were positively correlated with guilt and largely unrelated to shame; among older participants, the results were mixed.", "title": "" }, { "docid": "a07472c2f086332bf0f97806255cb9d5", "text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.", "title": "" }, { "docid": "213862a47773c5ad34aa69b8b0a951d1", "text": "The next generation wireless networks are expected to operate in fully automated fashion to meet the burgeoning capacity demand and to serve users with superior quality of experience. Mobile wireless networks can leverage spatio-temporal information about user and network condition to embed the system with end-to-end visibility and intelligence. Big data analytics has emerged as a promising approach to unearth meaningful insights and to build artificially intelligent models with assistance of machine learning tools. Utilizing aforementioned tools and techniques, this paper contributes in two ways. First, we utilize mobile network data (Big Data)—call detail record—to analyze anomalous behavior of mobile wireless network. For anomaly detection purposes, we use unsupervised clustering techniques namely k-means clustering and hierarchical clustering. We compare the detected anomalies with ground truth information to verify their correctness. From the comparative analysis, we observe that when the network experiences abruptly high (unusual) traffic demand at any location and time, it identifies that as anomaly. This helps in identifying regions of interest in the network for special action such as resource allocation, fault avoidance solution, etc. 
Second, we train a neural-network-based prediction model with anomalous and anomaly-free data to highlight the effect of anomalies in data while training/building intelligent models. In this phase, we transform our anomalous data to anomaly-free and we observe that the error in prediction, while training the model with anomaly-free data has largely decreased as compared to the case when the model was trained with anomalous data.", "title": "" }, { "docid": "76a2bc6a8649ffe9111bfaa911572c9d", "text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.", "title": "" }, { "docid": "c5eb252d17c2bec8ab168ca79ec11321", "text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260)", "title": "" },
{ "docid": "f66ebffa2efda9a4728a85c0b3a94fc7", "text": "The vulnerability of face recognition systems is a growing concern that has drawn the interest from both academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique due to evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing light field camera (LFC). Since the use of a LFC can record the direction of each incoming ray in addition to the intensity, it exhibits an unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC that in turn can be used to reveal the presentation attacks. To this extent, we first collect a new face artefact database using LFC that comprises of 80 subjects. Face artefacts are generated by simulating two widely used attacks, such as photo print and electronic screen attack. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked with various well established state-of-the-art schemes.", "title": "" }, { "docid": "5654bea8e2fe999fe52ec7536edd0f52", "text": "Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better meeting user expectations. Thus, automated approaches have been proposed in literature with the aim of reducing the effort required for analyzing feedback contained in user reviews via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to user feedback received on a daily basis. SURF relies on a conceptual model for capturing user needs useful for developers performing maintenance and evolution tasks. Then it uses sophisticated summarisation techniques for summarizing thousands of reviews and generating an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. Results demonstrate high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers in better understanding user needs, substantially reducing the time required by developers compared to manually analyzing user (change) requests and planning future software changes.", "title": "" } ]
scidocsrr
4e8dfbbe2aa13df11e8744a3486e1025
GESPAR: Efficient Phase Retrieval of Sparse Signals
[ { "docid": "97e5f2e774b58f7533242114e5e06159", "text": "We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.", "title": "" } ]
[ { "docid": "1eee4b9b835eafe2948b96d0612805a1", "text": "Virtual Machine Introspection (VMI) is a technique that enables monitoring virtual machines at the hypervisor layer. This monitoring concept has gained recently a considerable focus in computer security research due to its complete but semantic less visibility on virtual machines activities and isolation from them. VMI works range from addressing the semantic gap problem to leveraging explored VMI techniques in order to provide novel hypervisor-based services that belong to different fields. This paper aims to survey and classify existing VMI techniques and their applications.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "1fc965670f71d9870a4eea93d129e285", "text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ddfb4c50640ee911b56cb80db3da9838", "text": "Serious concerns have been raised about stealthy disclosures of private user data in smartphone apps, and recent research efforts in mobile security have studied various mechanisms to detect privacy disclosures. However, existing approaches are not effective in informing users and security analysts about potential privacy leakage threats. 
This is because these methods largely fail to 1) provide highly accurate and inclusive detection of privacy disclosures, and 2) filter out legitimate privacy disclosures that usually dominate detection results and in turn obscure true threats. In this paper, we propose AAPL, an automated system that detects privacy leaks (i.e., truly suspicious privacy disclosures) in Android apps. AAPL is based on multiple special static analysis techniques that we’ve developed for Android apps, including conditional flow identification and joint flow tracking. Furthermore, AAPL employs a new approach called peer voting to filter out most of the legitimate privacy disclosures from the results, purifying the detection results for automatic and easy interpretation. We implemented AAPL and evaluated it over 40, 456 apps. The results indicate that, on average, AAPL achieves an accuracy of 88.7%. For particular disclosures (e.g., contacts), the accuracy is up to 94.6%. Using AAPL, we successfully revealed a collection of unknown privacy leaks. The throughput of our privacy disclosure analysis module is 4.5 apps per minute on a threemachine cluster.", "title": "" }, { "docid": "eb6636299df817817aa49f1f8dad04f5", "text": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models.", "title": "" }, { "docid": "71aa2a7cf6cdb8d49bce2f2374504721", "text": "Ubiquitous computing, over a decade in the making, has finally graduated from whacky buzzword through fashionable research topic to something that is definitely and inevitably happening. This will mean revolutionary changes in the way computing affects our society: changes of the same magnitude and scope as those brought about by the World Wide Web. When throw-away computing capabilities are embedded in shoes, drink cans and postage stamps, security and privacy take on entirely new meanings. Programmers, engineers and system designers will have to learn to think in new ways. 
Ubiquitous computing is not just a wireless version of the Internet with a thousand times more computers, and it would be a naive mistake to imagine that the traditional security solutions for distributed systems will scale to the new scenario. Authentication, authorization, and even concepts as fundamental as ownership require thorough rethinking. At a higher level still, even goals and policies must be revised. One question we should keep asking is simply “Security for whom?” The owner of a device, for example, is no longer necessarily the party whose interests the device will attempt to safeguard. Ubiquitous computing is happening and will affect everyone. By itself it will never be “secure” (whatever this means) if not for the dedicated efforts of people like us who actually do the work. We are the ones who can make the difference. So, before focusing on the implementation details, let’s have a serious look at the big picture. C. Park and S. Chee (Eds.): ICISC 2004, LNCS 3506, p. 2, 2005. c © Springer-Verlag Berlin Heidelberg 2005", "title": "" }, { "docid": "ac4992ca29fb904fb9b52c2a3208bd44", "text": "Rather than striving to be “perfectly agile, ” some organizations desire to be more agile than their competition and/or the industry. The Comparative Agility™ (CA) assessment tool can be used to aid organizations in determining their relative agility compared with other teams who responded to CA. The results of CA can be used by a team to guide process improvement related to the use of agile software development practices. This paper provides an overview of industry trends in agility based upon 1, 235 CA respondents in a range of domains and geographical locations. Additionally, the paper goes further in depth on the results of four industrial teams who responded to the CA, explaining why their results were relatively high or low based upon experiences with the teams. The paper also discusses the resultant process improvement reactions and plans of these teams subsequent to reviewing their CA results.", "title": "" }, { "docid": "de007bc4c5fc33e82c91177e0798cc3b", "text": "Current knowledge delivery methods in education should move away from memory based learning to more motivated and creative education. This paper will emphasize on the advantages tangible interaction can bring to education. Augmented Chemistry provides an efficient way for designing and interacting with the molecules to understand the spatial relations between molecules. For Students it is very informative to see actual molecules representation 3D environment, inspect molecules from multiple viewpoints and control the interaction of molecules. We present in this paper an Augmented Reality system for teaching spatial relationships and chemical-reaction problem-solving skills to school-level students based on the VSEPR theory. Our system is based on inexpensive webcams and open-source software. We hope this willgenerate more ideas for educators and researcher to explore Augmented Reality", "title": "" }, { "docid": "217e3b6bc1ed6a1ef8860efff285f4ab", "text": "Currently, salvage is considered as an effective way for protecting ecosystems of inland water from toxin-producing algal blooms. Yet, the magnitude of algal blooms, which is the essential information required for dispatching salvage boats, cannot be estimated accurately with low cost in real time. 
In this paper, a data-driven soft sensor is proposed for algal blooms monitoring, which estimates the magnitude of algal blooms using data collected by inexpensive water quality sensors as input. The modeling of the soft sensor consists of two steps: 1) magnitude calculation and 2) regression model training. In the first step, we propose an active learning strategy to construct high-accuracy image classification model with ~50 % less labeled data. Based on this model, we design a new algorithm that recognizes algal blooms and calculates the magnitude using water surface pictures. In the second step, we propose to use Gaussian process to train the regression model that maps the multiparameter water quality sensor data to the calculated magnitude of algal blooms and learn the parameters of the model automatically from the training data. We conduct extensive experiments to evaluate our modeling method, AlgaeSense, based on over 200 000 heterogeneous sensor data records collected in four months from our field-deployed sensor system. The results indicate that the soft sensor can accurately estimate the magnitude of algal blooms in real time using data collected by just three kinds of inexpensive water quality sensors.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "31cf550d44266e967716560faeb30f2b", "text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. 
We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.", "title": "" }, { "docid": "e1ced56a089d36438b0e6a20936df1c1", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. To my mother Maxine, who gave me a love of learning; to Susan, who is as happy and amazed as I am that The Book is finally completed; to Josh, Tim, and Teddy, who are impressed that their father is an Author; and to my late father George, who would have been proud. To Gale, for putting up with this over the years; to David and Sara, for sometimes letting Daddy do his work; and to my mother Eva, to whom I can finally say \" It's done! \". To Pam, who listened for years to my promises that the book is 90% done; to Aaron, who, I hope, will read this book; to my parents, Zipporah and Pinkhas, who taught me to think; and to my grandparents, who perished in the Holocaust.. iaè knk aè kn yi-m ` , e ` xe ehiad \" Behold and see , if there be any sorrow like unto my sorrow. \" M. Y. V .", "title": "" }, { "docid": "55be87322fb1dca58faab26626369d14", "text": "OBJECTIVE\nThe purpose of this study was to compare parents' perceptions of the responses of their preschool children, with and without attention deficit hyperactivity disorder (ADHD), to sensory events in daily life in Israel. In addition, the relationship between levels of hyperactivity and sensory deficits was examined.\n\n\nMETHOD\nThe Sensory Profile Questionnaire (SP) was completed by mothers of forty-eight 4- to 6-year-old children with ADHD, and mothers of 46 children without disabilities. A matched group comparison design was used to identify possible differences in sensory processing.\n\n\nRESULTS\nBased on the measure of mothers' perceptions, children with ADHD demonstrated statistically significant differences from children without ADHD in their sensory responsiveness as reflected in 6 out of 9 factor scores (p < .001-.05), and on their sensory processing, modulation, and behavioral and emotional responses, as reflected in 11 out of 14 section scores (p < .001-.05). Scores on the SP yielded statistically significant low to moderate correlations with scores on the hyperactive scale of the Preschool Behavior Questionnaire (r = .28-.66).\n\n\nCONCLUSION\nThe findings of the present study suggest that young children with ADHD may be at increased risk of deficits in various sensory processing abilities, over and above the core symptoms of ADHD. Early identification and treatment of sensory processing deficits from a young age may extend our ability to support the successful performance of children with ADHD in meaningful and productive occupations.", "title": "" }, { "docid": "1e0a4246c81896c3fd5175bc10065460", "text": "Automatic modulation recognition (AMR) is becoming more important because it is usable in advanced general-purpose communication such as, cognitive radio, as well as, specific applications. 
Therefore, developments should be made for widely used modulation types; machine learning techniques should be employed for this problem. In this study, we have evaluated performances of different machine learning algorithms for AMR. Specifically, we have evaluated performances of artificial neural networks, support vector machines, random forest tree, k-nearest neighbor, Hoeffding tree, logistic regression, Naive Bayes and Gradient Boosted Regression Tree methods to obtain comparative results. The most preferred feature extraction methods in the literature have been used for a set of modulation types for general-purpose communication. We have considered AWGN and Rayleigh channel models evaluating their recognition performance as well as having made recognition performance improvement over Rayleigh for low SNR values using the reception diversity technique. We have compared their recognition performance in the accuracy metric, and plotted them as well. Furthermore, we have served confusion matrices for some particular experiments.", "title": "" }, { "docid": "e9676faf7e8d03c64fdcf6aa5e09b008", "text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.", "title": "" }, { "docid": "88d2fd675e5d0a53ff0834505a438164", "text": "BACKGROUND\nMany healthcare organizations have implemented adverse event reporting systems in the hope of learning from experience to prevent adverse events and medical errors. However, a number of these applications have failed or not been implemented as predicted.\n\n\nOBJECTIVE\nThis study presents an extended technology acceptance model that integrates variables connoting trust and management support into the model to investigate what determines acceptance of adverse event reporting systems by healthcare professionals.\n\n\nMETHOD\nThe proposed model was empirically tested using data collected from a survey in the hospital environment. A confirmatory factor analysis was performed to examine the reliability and validity of the measurement model, and a structural equation modeling technique was used to evaluate the causal model.\n\n\nRESULTS\nThe results indicated that perceived usefulness, perceived ease of use, subjective norm, and trust had a significant effect on a professional's intention to use an adverse event reporting system. Among them, subjective norm had the most contribution (total effect). Perceived ease of use and subjective norm also had a direct effect on perceived usefulness and trust, respectively. Management support had a direct effect on perceived usefulness, perceived ease of use, and subjective norm.\n\n\nCONCLUSION\nThe proposed model provides a means to understand what factors determine the behavioral intention of healthcare professionals to use an adverse event reporting system and how this may affect future use. 
In addition, understanding the factors contributing to behavioral intent may potentially be used in advance of system development to predict reporting systems acceptance.", "title": "" }, { "docid": "6a51aba04d0af9351e86b8a61b4529cb", "text": "Cloud computing is a newly emerged technology, and the rapidly growing field of IT. It is used extensively to deliver Computing, data Storage services and other resources remotely over internet on a pay per usage model. Nowadays, it is the preferred choice of every IT organization because it extends its ability to meet the computing demands of its everyday operations, while providing scalability, mobility and flexibility with a low cost. However, the security and privacy is a major hurdle in its success and its wide adoption by organizations, and the reason that Chief Information Officers (CIOs) hesitate to move the data and applications from premises of organizations to the cloud. In fact, due to the distributed and open nature of the cloud, resources, applications, and data are vulnerable to intruders. Intrusion Detection System (IDS) has become the most commonly used component of computer system security and compliance practices that defends network accessible Cloud resources and services from various kinds of threats and attacks. This paper presents an overview of different intrusions in cloud, various detection techniques used by IDS and the types of Cloud Computing based IDS. Then, we analyze some pertinent existing cloud based intrusion detection systems with respect to their various types, positioning, detection time and data source. The analysis also gives strengths of each system, and limitations, in order to evaluate whether they carry out the security requirements of cloud computing environment or not. We highlight the deployment of IDS that uses multiple detection approaches to deal with security challenges in cloud.", "title": "" }, { "docid": "57bebb90000790a1d76a400f69d5736d", "text": "In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projec-tion(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. 
The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.", "title": "" }, { "docid": "58da9f4a32fe0ea42d12718ff825b9b2", "text": "Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called as tensor. This review summarizes the current progress of tensor decomposition of EEG signals with three aspects. The first is about the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, it is also called parallel factor analysis-PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models for EEG signals are addressed. Particularly, the determination of the number of components for each mode is discussed. Finally, the N-way partial least square and higher-order partial least square are described for a potential trend to process and analyze brain signals of two modalities simultaneously.", "title": "" }, { "docid": "aa83af152739ac01ba899d186832ee62", "text": "Predicting user \"ratings\" on items is a crucial task in recommender systems. Matrix factorization methods that computes a low-rank approximation of the incomplete user-item rating matrix provide state-of-the-art performance, especially for users and items with several past ratings (warm starts). However, it is a challenge to generalize such methods to users and items with few or no past ratings (cold starts). Prior work [4][32] have generalized matrix factorization to include both user and item features for performing better regularization of factors as well as provide a model for smooth transition from cold starts to warm starts. However, the features were incorporated via linear regression on factor estimates. In this paper, we generalize this process to allow for arbitrary regression models like decision trees, boosting, LASSO, etc. The key advantage of our approach is the ease of computing --- any new regression procedure can be incorporated by \"plugging\" in a standard regression routine into a few intermediate steps of our model fitting procedure. With this flexibility, one can leverage a large body of work on regression modeling, variable selection, and model interpretation. We demonstrate the usefulness of this generalization using the MovieLens and Yahoo! Buzz datasets.", "title": "" } ]
scidocsrr
2164ad1e3a5ebc6195e7ac1f7f7ad1c7
A 3D steady-state model of a tendon-driven continuum soft manipulator inspired by the octopus arm.
[ { "docid": "2089f931cf6fca595898959cbfbca28a", "text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.", "title": "" }, { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" } ]
[ { "docid": "25eee8be0a4e4e5dd29fe31ccc902b77", "text": "3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years however there has been a move to adopt the technology as full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base are now able to have access to desktop manufacturing platforms enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing with offer a new paradigm in the 3D printing field with printed sensors and electronics embedded inside 3D printed objects in a single build process without requiring complex or expensive materials incorporating additives such as carbon nanotubes.", "title": "" }, { "docid": "06e58f46c989f22037f443ccf38198ce", "text": "Many biological surfaces in both the plant and animal kingdom possess unusual structural features at the micro- and nanometre-scale that control their interaction with water and hence wettability. An intriguing example is provided by desert beetles, which use micrometre-sized patterns of hydrophobic and hydrophilic regions on their backs to capture water from humid air. As anyone who has admired spider webs adorned with dew drops will appreciate, spider silk is also capable of efficiently collecting water from air. Here we show that the water-collecting ability of the capture silk of the cribellate spider Uloborus walckenaerius is the result of a unique fibre structure that forms after wetting, with the ‘wet-rebuilt’ fibres characterized by periodic spindle-knots made of random nanofibrils and separated by joints made of aligned nanofibrils. These structural features result in a surface energy gradient between the spindle-knots and the joints and also in a difference in Laplace pressure, with both factors acting together to achieve continuous condensation and directional collection of water drops around spindle-knots. Submillimetre-sized liquid drops have been driven by surface energy gradients or a difference in Laplace pressure, but until now neither force on its own has been used to overcome the larger hysteresis effects that make the movement of micrometre-sized drops more difficult. By tapping into both driving forces, spider silk achieves this task. Inspired by this finding, we designed artificial fibres that mimic the structural features of silk and exhibit its directional water-collecting ability.", "title": "" }, { "docid": "60edfab6fa5f127dd51a015b20d12a68", "text": "We discuss the ethical implications of Natural Language Generation systems. 
We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.", "title": "" }, { "docid": "2f201cd1fe90e0cd3182c672110ce96d", "text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. 
However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.", "title": "" }, { "docid": "b52580bfad9621a1b0537ceed0c912c0", "text": "Partial discharge (PD) detection is an effective method for finding insulation defects in HV and EHV power cables. PD apparent charge is typically expressed in picocoulombs (pC) when the calibration procedure defined in IEC 60270 is applied during off-line tests. During on-line PD detection, measured signals are usually denoted in mV or dB without transforming the measured signal into a charge quantity. For AC XLPE power cable systems, on-line PD detection is conducted primarily with the use of high frequency current transformer (HFCT). The HFCT is clamped around the cross-bonding link of the joint or the grounding wire of termination. In on-line occasion, PD calibration is impossible from the termination. A novel on-line calibration method using HFCT is introduced in this paper. To eliminate the influence of cross-bonding links, the interrupted cable sheath at the joint was reconnected via the high-pass C-arm connector. The calibration signal was injected into the cable system via inductive coupling through the cable sheath. The distributed transmission line equivalent circuit of the cable was used in consideration of the signal attenuation. Both the conventional terminal calibration method and the proposed on-line calibration method were performed on the coaxial cable model loop for experimental verification. The amplitude and polarity of signals that propagate in the cable sheath and the conductor were evaluated. The results indicate that the proposed method can calibrate the measured signal during power cable on-line PD detection.", "title": "" }, { "docid": "55160cc3013b03704555863c710e6d21", "text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.", "title": "" }, { "docid": "755f7e93dbe43a0ed12eb90b1d320cb2", "text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. 
The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).", "title": "" }, { "docid": "d21e4e55966bac19bbed84b23360b66d", "text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.", "title": "" }, { "docid": "63c6d8ed1788f4d803a4c63bd2dd1b2f", "text": "Recommender systems have been developed to overcome the information overload problem by retrieving the most relevant resources. Constructing an appropriate model to estimate the user interests is the major task of recommender systems. The profile matching and latent factors are two main approaches for user modeling. Although a notion of timestamps has already been applied to address the temporary nature of recommender systems, the evolutionary behavior of such systems is less studied. In this paper, we introduce the concept of trend to capture the interests of user in selecting items among different group of similar items. The trend based user model is constructed by incorporating user profile into a new extension of Distance Dependent Chines Restaurant Process (dd-CRP). dd-CRP which is a Bayesian Nonparametric model, provides a framework for constructing an evolutionary user model that captures the dynamics of user interests. We evaluate the proposed method using a real-world data-set that contains news tweets of three news agencies (New York Times, BBC and Associated Press). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach, and its ability to effectively evolve over time. ∗Corresponding author at: Faculty of Computer Engineering and Information Technology, Shahrood University of Technology, P.O. Box 316, Shahrood, Iran. Tel/Fax: +98(23) 32300251. 
", "title": "" }, { "docid": "42c7c881935df8b22068dabdd48a05e8", "text": "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.", "title": "" }, { "docid": "912c213d76bed8d90f636ea5a6220cf1", "text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. 
The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.", "title": "" }, { "docid": "535b093171db9cfafba4fc91c4254137", "text": "Millimeter-wave communication is one way to alleviate the spectrum gridlock at lower frequencies while simultaneously providing high-bandwidth communication channels. MmWave makes use of MIMO through large antenna arrays at both the base station and the mobile station to provide sufficient received signal power. This article explains how beamforming and precoding are different in MIMO mmWave systems than in their lower-frequency counterparts, due to different hardware constraints and channel characteristics. Two potential architectures are reviewed: hybrid analog/digital precoding/combining and combining with low-resolution analog-to-digital converters. The potential gains and design challenges for these strategies are discussed, and future research directions are highlighted.", "title": "" }, { "docid": "3bda571d6efd59f451297cab3db79b48", "text": "Our work focuses on robots to be deployed in human environments. These robots, which will need specialized object manipulation skills, should leverage end-users to efficiently learn the affordances of objects in their environment. This approach is promising because people naturally focus on showing salient aspects of the objects [1]. We replicate prior results and build on them to create a combination of self and supervised learning. We present experimental results with a robot learning 5 affordances on 4 objects using 1219 interactions. We compare three conditions: (1) learning through self-exploration, (2) learning from supervised examples provided by 10 naïve users, and (3) self-exploration biased by the user input. Our results characterize the benefits of self and supervised affordance learning and show that a combined approach is the most efficient and successful.", "title": "" }, { "docid": "212536baf7f5bd2635046774436e0dbf", "text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alternative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.", "title": "" }, { "docid": "d33b2e5883b14ac771cf128d309eddbf", "text": "Automated lip reading is the process of converting movements of the lips, face and tongue to speech in real time with enhanced accuracy. 
Although performance of lip reading systems is still not remotely similar to audio speech recognition, recent developments in processor technology and the massive explosion and ubiquity of computing devices accompanied with increased research in this field has reduced the ambiguities of the labial language, making it possible for free speech-to-text conversion. This paper surveys the field of lip reading and provides a detailed discussion of the trade-offs between various approaches. It gives a reverse chronological topic wise listing of the developments in lip reading systems in recent years. With advancement in computer vision and pattern recognition tools, the efficacy of real time, effective conversion has increased. The major goal of this paper is to provide a comprehensive reference source for the researchers involved in lip reading, not just for the esoteric academia but all the people interested in this field regardless of particular application areas.", "title": "" }, { "docid": "5b0842894cbf994c3e63e521f7352241", "text": "The burgeoning field of genomics has revived interest in multiple testing procedures by raising new methodological and computational challenges. For example, microarray experiments generate large multiplicity problems in which thousands of hypotheses are tested simultaneously. Westfall and Young (1993) propose resampling-based p-value adjustment procedures which are highly relevant to microarray experiments. This article discusses different criteria for error control in resampling-based multiple testing, including (a) the family wise error rate of Westfall and Young (1993) and (b) the false discovery rate developed by Benjamini and Hochberg (1995), both from a frequentist viewpoint; and (c) the positive false discovery rate of Storey (2002a), which has a Bayesian motivation. We also introduce our recently developed fast algorithm for implementing the minP adjustment to control family-wise error rate. Adjusted p-values for different approaches are applied to gene expression data from two recently published microarray studies. The properties of these procedures for multiple testing are compared.", "title": "" }, { "docid": "7c56d7bd2ca8e03ba828343dbb6f38bd", "text": "The goal of Spoken Term Detection (STD) technology is to allow open vocabulary search over large collections of speech content. In this paper, we address cases where search term(s) of interest (queries) are acoustic examples. This is provided either by identifying a region of interest in a speech stream or by speaking the query term. Queries often relate to named-entities and foreign words, which typically have poor coverage in the vocabulary of Large Vocabulary Continuous Speech Recognition (LVCSR) systems. Throughout this paper, we focus on query-by-example search for such out-of-vocabulary (OOV) query terms. We build upon a finite state transducer (FST) based search and indexing system [1] to address the query by example search for OOV terms by representing both the query and the index as phonetic lattices from the output of an LVCSR system. We provide results comparing different representations and generation mechanisms for both queries and indexes built with word and combined word and subword units [2]. We also present a two-pass method which uses query-by-example search using the best hit identified in an initial pass to augment the STD search results. 
The results demonstrate that query-by-example search can yield a significantly better performance, measured using Actual Term-Weighted Value (ATWV), of 0.479 when compared to a baseline ATWV of 0.325 that uses reference pronunciations for OOVs. Further improvements can be obtained with the proposed two pass approach and filtering using the expected unigram counts from the LVCSR system's lexicon.", "title": "" }, { "docid": "a387781a96a39448ca22b49154aaf80c", "text": "LEGO is a globally popular toy composed of colorful interlocking plastic bricks that can be assembled in many ways; however, this special feature makes designing a LEGO sculpture particularly challenging. Building a stable sculpture is not easy for a beginner; even an experienced user requires a good deal of time to build one. This paper provides a novel approach to creating a balanced LEGO sculpture for a 3D model in any pose, using centroid adjustment and inner engraving. First, the input 3D model is transformed into a voxel data structure. Next, the model’s centroid is adjusted to an appropriate position using inner engraving to ensure that the model stands stably. A model can stand stably without any struts when the center of mass is moved to the ideal position. Third, voxels are merged into layer-by-layer brick layout assembly instructions. Finally, users will be able to build a LEGO sculpture by following these instructions. The proposed method is demonstrated with a number of LEGO sculptures and the results of the physical experiments are presented.", "title": "" }, { "docid": "753dd3ac36056bca7eed41ccd11df010", "text": "Neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics. We present a sequence-to-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers. To facilitate the learning of the output, we represent DRSs as a sequence of flat clauses and introduce a method to verify that produced DRSs are well-formed and interpretable. We compare models using characters and words as input and see (somewhat surprisingly) that the former performs better than the latter. We show that eliminating variable names from the output using De Bruijn indices increases parser performance. Adding silver training data boosts performance even further.", "title": "" } ]
scidocsrr
6da9db0423e381162a4c14864e030624
PAWE: Polysemy Aware Word Embeddings
[ { "docid": "dadd12e17ce1772f48eaae29453bc610", "text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st", "title": "" } ]
[ { "docid": "a53caf0e12e25aadb812e9819fa41e27", "text": "Abstact This paper does not pretend either to transform completely the ontological art in engineering or to enumerate xhaustively the complete set of works that has been reported in this area. Its goal is to clarify to readers interested in building ontologies from scratch, the activities they should perform and in which order, as well as the set of techniques to be used in each phase of the methodology. This paper only presents a set of activities that conform the ontology development process, a life cycle to build ontologies based in evolving prototypes, and METHONTOLOGY, a well-structured methodology used to build ontologies from scratch. This paper gathers the experience of the authors on building an ontology in the domain of chemicals.", "title": "" }, { "docid": "ff826e50f789d4e47f30ec22396c365d", "text": "In present Scenario of the world, Internet has almost reached to every aspect of our lives. Due to this, most of the information sharing and communication is carried out using web. With such rapid development of Internet technology, a big issue arises of unauthorized access to confidential data, which leads to utmost need of information security while transmission. Cryptography and Steganography are two of the popular techniques used for secure transmission. Steganography is more reliable over cryptography as it embeds secret data within some cover material. Unlike cryptography, Steganography is not for keeping message hidden from intruders but it does not allow anyone to know that hidden information even exist in communicated material, as the transmitted material looks like any normal message which seem to be of no use for intruders. Although, Steganography covers many types of covers to hide data like text, image, audio, video and protocols but recent developments focuses on Image Steganography due to its large data hiding capacity and difficult identification, also due to their greater scope and bulk sharing within social networks. A large number of techniques are available to hide secret data within digital images such as LSB, ISB, and MLSB etc. In this paper, a detailed review will be presented on Image Steganography and also different data hiding and security techniques using digital images with their scope and features.", "title": "" }, { "docid": "acf578c24c5c6768d99f211c5047beaa", "text": "In indoor environments, there exists a few distinctive indoor spaces' features (ISFs). However, up to our knowledge, there is no algorithm that fully utilizes ISF for accurate 3-D SLAM. In this letter, we suggest a sensor system that efficiently captures ISF and propose an algorithm framework that accurately estimates sensor's 3-D poses by utilizing ISF. Experiments conducted in six representative indoor spaces show that the accuracy of the proposed method is better than the previous method. Furthermore, the proposed method shows robust performances in a sense that a set of adjusted parameters of the related algorithms does not need to be recalibrated as target environment changes. We also demonstrate that the proposed method not only generates 3-D depth maps but also builds a dense 3-D RGB-D map.", "title": "" }, { "docid": "8986de609f238e83623c7130a9ab9253", "text": "The color psychology literature has made a convincing case that color is not just about aesthetics, but also about meaning. 
This work has involved situational manipulations of color, rendering it uncertain as to whether color-meaning associations can be used to characterize how people differ from each other. The present research focuses on the idea that the color red is linked to, or associated with, individual differences in interpersonal hostility. Across four studies (N = 376 undergraduates), red preferences and perceptual biases were measured along with individual differences in interpersonal hostility. It was found that (a) a preference for the color red was higher as interpersonal hostility increased, (b) hostile people were biased to see the color red more frequently than nonhostile people, and (c) there was a relationship between a preference for the color red and hostile social decision making. These studies represent an important extension of the color psychology literature, highlighting the need to attend to person-based, as well as situation-based, factors.", "title": "" }, { "docid": "44352346cff6da1c4ac010ae932ce6fb", "text": "Most research on intelligent agents centers on the agent and not on the user. We look at the origins of agent-centric research for slotfilling, gaming and chatbot agents. We then argue that it is important to concentrate more on the user. After reviewing relevant literature, some approaches for creating and assessing user-centric systems are proposed.", "title": "" }, { "docid": "b939227b7de6ef57c2d236fcb01b7bfc", "text": "We propose a speed estimation method with human body accelerations measured on the chest by a tri-axial accelerometer. To estimate the speed we segmented the acceleration signal into strides measuring stride time, and applied two neural networks into the patterns parameterized from each stride calculating stride length. The first neural network determines whether the subject walks or runs, and the second neural network with different node interactions according to the subject's status estimates stride length. Walking or running speed is calculated with the estimated stride length divided by the measured stride time. The neural networks were trained by patterns obtained from 15 subjects and then validated by 2 untrained subjects' patterns. The result shows good agreement between actual and estimated speeds presenting the linear correlation coefficient r = 0.9874. We also applied the method to the real field and track data.", "title": "" }, { "docid": "ca75798a9090810682f99400f6a8ff4e", "text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.", "title": "" }, { "docid": "17b7930531d63d51e33c714a072acbe8", "text": "Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. 
While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach, to use AVI to initialize the variational parameters and run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.", "title": "" }, { "docid": "71ae8b4cc2f4e531be95cdbb147c75eb", "text": "This paper is to explore the possibility to use alternative data and artificial intelligence techniques to trade stocks. The efficacy of the daily Twitter sentiment on predicting the stock return is examined using machine learning methods. Reinforcement learning (Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predicting power of the sentiment signal is more significant if the stock price is driven by the expectation on the company growth and when the company has a major event that draws the public attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.", "title": "" }, { "docid": "9be5326deba6eaab21150edf882188f1", "text": "CARS 2016—Computer Assisted Radiology and Surgery Proceedings of the 30th International Congress and Exhibition Heidelberg, Germany, June 21–25, 2016", "title": "" }, { "docid": "10fdf6862a55decc0a27ea4bc1f426e4", "text": "This paper presents a novel encoder-decoder model for automatically generating market comments from stock prices. The model first encodes both short- and long-term series of stock prices so that it can mention short- and long-term changes in stock prices. In the decoding phase, our model can also generate a numerical value by selecting an appropriate arithmetic operation such as subtraction or rounding, and applying it to the input stock prices. Empirical experiments show that our best model generates market comments at the fluency and the informativeness approaching human-generated reference texts.", "title": "" }, { "docid": "4818e47ceaec70457701649832fb90c4", "text": "Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let θ be the fraction of jobs routed to the first I/O device, so that 1 - θ is the fraction routed to the second. Suppose that α = α(θ) is the steady-state amount of time that a job spends in the system. Given that θ is a decision variable, a designer might wish to minimize α(θ) over θ. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. 
It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.\n In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we will present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood-ratio-gradient estimators for both time-homogeneous and non-time homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines α(θ) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.", "title": "" }, { "docid": "f910efe3b9bf7450d29c582e83ba0557", "text": "Based on the intuition that frequent patterns can be used to predict the next few items that users would want to access, sequential pattern mining-based next-items recommendation algorithms have performed well in empirical studies including online product recommendation. However, most current methods do not perform personalized sequential pattern mining, and this seriously limits their capability to recommend the best next-items to each specific target user. In this paper, we introduce a personalized sequential pattern mining-based recommendation framework. Using a novel Competence Score measure, the proposed framework effectively learns user-specific sequence importance knowledge, and exploits this additional knowledge for accurate personalized recommendation. Experimental results on real-world datasets demonstrate that the proposed framework effectively improves the efficiency for mining sequential patterns, increases the user-relevance of the identified frequent patterns, and most importantly, generates significantly more accurate next-items recommendation for the target users.", "title": "" }, { "docid": "725bfdbd65a62d3d7ac50fee087d752f", "text": "BACKGROUND\nIndividuals with autism spectrum disorders (ASDs) often display symptoms from other diagnostic categories. Studies of clinical and psychosocial outcome in adult patients with ASDs without concomitant intellectual disability are few. 
The objective of this paper is to describe the clinical psychiatric presentation and important outcome measures of a large group of normal-intelligence adult patients with ASDs.\n\n\nMETHODS\nAutistic symptomatology according to the DSM-IV-criteria and the Gillberg & Gillberg research criteria, patterns of comorbid psychopathology and psychosocial outcome were assessed in 122 consecutively referred adults with normal intelligence ASDs. The subjects consisted of 5 patients with autistic disorder (AD), 67 with Asperger's disorder (AS) and 50 with pervasive developmental disorder not otherwise specified (PDD NOS). This study group consists of subjects pooled from two studies with highly similar protocols, all seen on an outpatient basis by one of three clinicians.\n\n\nRESULTS\nCore autistic symptoms were highly prevalent in all ASD subgroups. Though AD subjects had the most pervasive problems, restrictions in non-verbal communication were common across all three subgroups and, contrary to current DSM criteria, so were verbal communication deficits. Lifetime psychiatric axis I comorbidity was very common, most notably mood and anxiety disorders, but also ADHD and psychotic disorders. The frequency of these diagnoses did not differ between the ASD subgroups or between males and females. Antisocial personality disorder and substance abuse were more common in the PDD NOS group. Of all subjects, few led an independent life and very few had ever had a long-term relationship. Female subjects more often reported having been bullied at school than male subjects.\n\n\nCONCLUSION\nASDs are clinical syndromes characterized by impaired social interaction and non-verbal communication in adulthood as well as in childhood. They also carry a high risk for co-existing mental health problems from a broad spectrum of disorders and for unfavourable psychosocial life circumstances. For the next revision of DSM, our findings especially stress the importance of careful examination of the exclusion criterion for adult patients with ASDs.", "title": "" }, { "docid": "96cc093006974b0a8a71f514ee10c38a", "text": "Image representation learning is a fundamental problem in understanding semantics of images. However, traditional classification-based representation learning methods face the noisy and incomplete problem of the supervisory labels. In this paper, we propose a general knowledge base embedded image representation learning approach, which uses general knowledge graph, which is a multitype relational knowledge graph consisting of human commonsense beyond image space, as external semantic resource to capture the relations of concepts in image representation learning. A relational regularized regression CNN (R$^3$CNN) model is designed to jointly optimize the image representation learning problem and knowledge graph embedding problem. In this manner, the learnt representation can capture not only labeled tags but also related concepts of images, which involves more precise and complete semantics. Comprehensive experiments are conducted to investigate the effectiveness and transferability of our approach in tag prediction task, zero-shot tag inference task, and content-based image retrieval task. The experimental results demonstrate that the proposed approach performs significantly better than the existing representation learning methods. 
Finally, observation of the learnt relations show that our approach can somehow refine the knowledge base to describe images and label the images with structured tags.", "title": "" }, { "docid": "28ba1eddc74c930350e1b2df5931fa39", "text": "In this paper, the problem of how to implement the MTPA/MTPV control for an energy efficient operation of a high speed Interior Permanent Magnet Synchronous Motor (IPMSM) used as traction drive is considered. This control method depends on the inductances Ld, Lq, the flux linkage ΨPM and the stator resistance Rs which might vary during operation. The parameter variation causes miscalculation of the set point currents Id and Iq for the inner current control system and thus a wrong torque will be set. Consequently the IPMSM will not be operating in the optimal operation point which yields to a reduction of the total energy efficiency and the performance. As a consequence, this paper proposes the implementation of the the Recursive Least Square Estimation (RLS) for a high speed and high performance IPMSM. With this online identification method the variable parameters are estimated and adapted to the MTPA and MTPV control strategy.", "title": "" }, { "docid": "f329009bbee172c495a441a0ab911e28", "text": "This paper provides an application of game theoretic techniques to the analysis of a class of multiparty cryptographic protocols for secret bit exchange.", "title": "" }, { "docid": "e0fbfac63b894c46e3acda86adb67053", "text": "OBJECTIVE\nTo investigate the effectiveness of acupuncture compared with minimal acupuncture and with no acupuncture in patients with tension-type headache.\n\n\nDESIGN\nThree armed randomised controlled multicentre trial.\n\n\nSETTING\n28 outpatient centres in Germany.\n\n\nPARTICIPANTS\n270 patients (74% women, mean age 43 (SD 13) years) with episodic or chronic tension-type headache.\n\n\nINTERVENTIONS\nAcupuncture, minimal acupuncture (superficial needling at non-acupuncture points), or waiting list control. Acupuncture and minimal acupuncture were administered by specialised physicians and consisted of 12 sessions per patient over eight weeks.\n\n\nMAIN OUTCOME MEASURE\nDifference in numbers of days with headache between the four weeks before randomisation and weeks 9-12 after randomisation, as recorded by participants in headache diaries.\n\n\nRESULTS\nThe number of days with headache decreased by 7.2 (SD 6.5) days in the acupuncture group compared with 6.6 (SD 6.0) days in the minimal acupuncture group and 1.5 (SD 3.7) days in the waiting list group (difference: acupuncture v minimal acupuncture, 0.6 days, 95% confidence interval -1.5 to 2.6 days, P = 0.58; acupuncture v waiting list, 5.7 days, 3.9 to 7.5 days, P < 0.001). The proportion of responders (at least 50% reduction in days with headache) was 46% in the acupuncture group, 35% in the minimal acupuncture group, and 4% in the waiting list group.\n\n\nCONCLUSIONS\nThe acupuncture intervention investigated in this trial was more effective than no treatment but not significantly more effective than minimal acupuncture for the treatment of tension-type headache.\n\n\nTRIAL REGISTRATION NUMBER\nISRCTN9737659.", "title": "" }, { "docid": "9547b04b76e653c8b4854ae193b4319f", "text": "© 2017 Western Digital Corporation or its affiliates. All rights reserved Emerging fast byte-addressable non-volatile memory (eNVM) technologies such as ReRAM and 3D Xpoint are projected to offer two orders of magnitude higher performance than flash. 
However, the existing solid-state drive (SSD) architecture optimizes for flash characteristics and is not adequate to exploit the full potential of eNVMs due to architectural and I/O interface (e.g., PCIe, SATA) limitations. To improve the storage performance and reduce the host main memory requirement for KVS, we propose a novel SSD architecture that extends the semantic of SSD with the KVS features and implements indexing capability inside SSD. It has in-storage processing engine that implements key-value operations such as get, put and delete to efficiently operate on KV datasets. The proposed system introduces a compute channel interface to offload key-value operations down to the SSD that significantly reduces the operating system, file system and other software overhead. This SSD achieves 4.96 Mops/sec get and 3.44 Mops/sec put operations and shows better scalability with increasing number of keyvalue pairs as compared to flash-based NVMe (flash-NVMe) and DRAMbased NVMe (DRAM-NVMe) devices. With decreasing DRAM size by 75%, its performance decreases gradually, achieving speedup of 3.23x as compared to DRAM-NVMe. This SSD significantly improves performance and reduces memory by exploiting the fine grain parallelism within a controller and keeping data movement local to effectively utilize eNVM bandwidth and eliminating the superfluous data movement between the host and the SSD. Abstract", "title": "" }, { "docid": "8536a89fdc1c3d1556a801b87e80b0c3", "text": "Pattern solutions for software and architectures have significantly reduced design, verification, and validation times by mapping challenging problems into a solved generic problem. In the paper, we present an architecture pattern for ensuring synchronous computation semantics using the PALS protocol. We develop a modeling framework in AADL to automatically transform a synchronous design of a real-time distributed system into an asynchronous design satisfying the PALS protocol. We present a detailed example of how the PALS transformation works for a dual-redundant system. From the example, we also describe the general transformation in terms of intuitively defined AADL semantics. Furthermore, we develop a static analysis checker to find necessary conditions that must be satisfied in order for the PALS transformation to work correctly. The transformations and static checks that we have described are implemented in OSATE using the generated EMF metamodel API for model manipulation.", "title": "" } ]
scidocsrr
87ee184416857fd92728f4da960ad230
Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions
[ { "docid": "bc758b1dd8e3a75df2255bb880a716ef", "text": "In recent years, convolutional neural networks (CNNs) based machine learning algorithms have been widely applied in computer vision applications. However, for large-scale CNNs, the computation-intensive, memory-intensive and resource-consuming features have brought many challenges to CNN implementations. This work proposes an end-to-end FPGA-based CNN accelerator with all the layers mapped on one chip so that different layers can work concurrently in a pipelined structure to increase the throughput. A methodology which can find the optimized parallelism strategy for each layer is proposed to achieve high throughput and high resource utilization. In addition, a batch-based computing method is implemented and applied on fully connected layers (FC layers) to increase the memory bandwidth utilization due to the memory-intensive feature. Further, by applying two different computing patterns on FC layers, the required on-chip buffers can be reduced significantly. As a case study, a state-of-the-art large-scale CNN, AlexNet, is implemented on Xilinx VC709. It can achieve a peak performance of 565.94 GOP/s and 391 FPS under 156MHz clock frequency which outperforms previous approaches.", "title": "" } ]
[ { "docid": "aa12fd5752d85d80ff33f620546cc288", "text": "Sentiment Analysis(SA) is a combination of emotions, opinions and subjectivity of text. Today, social networking sites like Twitter are tremendously used in expressing the opinions about a particular entity in the form of tweets which are limited to 140 characters. Reviews and opinions play a very important role in understanding peoples satisfaction regarding a particular entity. Such opinions have high potential for knowledge discovery. The main target of SA is to find opinions from tweets, extract sentiments from them and then define their polarity, i.e, positive, negative or neutral. Most of the work in this domain has been done for English Language. In this paper, we discuss and propose sentiment analysis using Hindi language. We will discuss an unsupervised lexicon method for classification.", "title": "" }, { "docid": "dcacbed90f45b76e9d40c427e16e89d6", "text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.", "title": "" }, { "docid": "b5f9d2f5c401be98b5e9546c0abaef22", "text": "This paper describes a new approach for training generative adversarial networks (GAN) to understand the detailed 3D shape of objects. While GANs have been used in this domain previously, they are notoriously hard to train, especially for the complex joint data distribution over 3D objects of many categories and orientations. Our method extends previous work by employing the Wasserstein distance normalized with gradient penalization as a training objective. This enables improved generation from the joint object shape distribution. Our system can also reconstruct 3D shape from 2D images and perform shape completion from occluded 2.5D range scans. We achieve notable quantitative improvements in comparison to existing baselines.", "title": "" }, { "docid": "76d514ee806b154b4fef2fe2c63c8b27", "text": "Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. 
Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.", "title": "" }, { "docid": "753dcf47f0d1d63d2b93a8f4b5d78a33", "text": "BACKGROUND\nTrichostasis spinulosa (TS) is a common, underdiagnosed cosmetic skin condition.\n\n\nOBJECTIVES\nThe main objectives of this study were to determine the occurrence of TS relative to age and gender, to analyze its cutaneous distribution, and to investigate any possible familial basis for this condition, its impact on patients, and the types and efficacy of previous treatments.\n\n\nMETHODS\nAll patients presenting to the outpatient dermatology clinic at the study institution and their relatives were examined for the presence of TS and were questioned about family history and previous treatment. Photographs and biopsies of suspected cases of TS were obtained.\n\n\nRESULTS\nOf 2400 patients seen between August and December 2013, 286 patients were diagnosed with TS (135 males, 151 females; prevalence: 11.9%). Women presented more frequently than men with complaints of TS (6.3 vs. 4.2%), and more women had received prior treatment for TS (10.5 vs. 2.8%). The most commonly affected sites were the face (100%), interscapular area (10.5%), and arms (3.1%). Lesions involved the nasal alae in 96.2%, the nasal tip in 90.9%, the chin in 55.9%, and the cheeks in 52.4% of patients. Only 15.7% of patients had forehead lesions, and only 4.5% had perioral lesions. Among the 38 previously treated patients, 65.8% reported temporary improvement.\n\n\nCONCLUSIONS\nTrichostasis spinulosa is a common condition that predominantly affects the face in patients of all ages. Additional studies employing larger cohorts from multiple centers will be required to determine the prevalence of TS in the general population.", "title": "" }, { "docid": "ec3f0abd53fa730574a2f23958edf95d", "text": "Does distraction or rumination work better to diffuse anger? Catharsis theory predicts that rumination works best, but empirical evidence is lacking. In this study, angered participants hit a punching bag and thought about the person who had angered them (rumination group) or thought about becoming physically fit (distraction group). After hitting the punching bag, they reported how angry they felt. Next, they were given the chance to administer loud blasts of noise to the person who had angered them. There also was a no punching bag control group. People in the rumination group felt angrier than did people in the distraction or control groups. People in the rumination group were also most aggressive, followed respectively by people in the distraction and control groups. Rumination increased rather than decreased anger and aggression. Doing nothing at all was more effective than venting anger. 
These results directly contradict catharsis theory.", "title": "" }, { "docid": "41e03f4540a090a9dc4e9551aad99fb6", "text": "• Unlabeled: Context constructed without dependency labels • Simplified: Functionally similar dependency labels are collapsed • Basic: Standard dependency parse • Enhanced and Enhanced++: Dependency trees augmented (e.g., new edges between modifiers and conjuncts with parents’ labels) • Universal Dependencies (UD): Cross-lingual • Stanford Dependencies (SD): English-tailored • Prior work [1] has shown that embeddings trained using dependency contexts distinguish related words better than similar words. • What effects do decisions made with embeddings have on the characteristics of the word embeddings? • Do Universal Dependency (UD) embeddings capture different characteristics than English-tailored Stanford Dependency (SD) embeddings?", "title": "" }, { "docid": "690659887c8261e2984802e2cdb71b5f", "text": "The Discrete Hodge Helmholtz Decomposition (DHHD) is able to locate critical points in a vector field. We explore two novel applications of this technique to image processing problems, viz., hurricane tracking and fingerprint analysis. The eye of the hurricane represents a rotational center, which is shown to be robustly detected using DHHD. This is followed by an automatic segmentation and tracking of the hurricane eye, which does not require manual initializations. DHHD is also used for identification of reference points in fingerprints. The new technique for reference point detection is relatively insensitive to noise in the orientation field. The DHHD based method is shown to detect reference points correctly for 96.25% of the images in the database used.", "title": "" }, { "docid": "65e64a012a064603f65d02881d7d629b", "text": "BACKGROUND\nThere is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak.\n\n\nOBJECTIVE\nTo develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees.\n\n\nMETHODS\nWe developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets.\n\n\nRESULTS\nThe computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. 
The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy.\n\n\nCONCLUSION\nThe proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.", "title": "" }, { "docid": "99e47a88f0950c1928557857facb35d5", "text": "We present the NBA framework, which extends the architecture of the Click modular router to exploit modern hardware, adapts to different hardware configurations, and reaches close to their maximum performance without manual optimization. NBA takes advantages of existing performance-excavating solutions such as batch processing, NUMA-aware memory management, and receive-side scaling with multi-queue network cards. Its abstraction resembles Click but also hides the details of architecture-specific optimization, batch processing that handles the path diversity of individual packets, CPU/GPU load balancing, and complex hardware resource mappings due to multi-core CPUs and multi-queue network cards. We have implemented four sample applications: an IPv4 and an IPv6 router, an IPsec encryption gateway, and an intrusion detection system (IDS) with Aho-Corasick and regular expression matching. The IPv4/IPv6 router performance reaches the line rate on a commodity 80 Gbps machine, and the performances of the IPsec gateway and the IDS reaches above 30 Gbps. We also show that our adaptive CPU/GPU load balancer reaches near-optimal throughput in various combinations of sample applications and traffic conditions.", "title": "" }, { "docid": "7d1bdb84425d344155d30f4c26ce47da", "text": "In the information age, data is pervasive. In some applications, data explosion is a significant phenomenon. The massive data volume poses challenges to both human users and computers. In this project, we propose a new model for identifying representative set from a large database. A representative set is a special subset of the original dataset, which has three main characteristics: It is significantly smaller in size compared to the original dataset. It captures the most information from the original dataset compared to other subsets of the same size. It has low redundancy among the representatives it contains. We use information-theoretic measures such as mutual information and relative entropy to measure the representativeness of the representative set. We first design a greedy algorithm and then present a heuristic algorithm that delivers much better performance. We run experiments on two real datasets and evaluate the effectiveness of our representative set in terms of coverage and accuracy. The experiments show that our representative set attains expected characteristics and captures information more efficiently.", "title": "" }, { "docid": "2970f641a9a9b71421783c929d4c8430", "text": "An electron linear accelerator system with several novel features has been developed for radiation therapy. The beam from a 25 cell S-band standing wave structure, operated in the π/2 mode with on-axis couplers, is reflected in an achromatic isochronous magnet and reinjected into the accelerator. 
The second pass doubles the energy while conserving rf power and minimizing the overall length of the unit. The beam is then transported through an annular electron gun and bent into the collimator by an innovative two-element doubly achromatic doubly focusing 270° magnet which allows a significant reduction in unit height. The energy is reduced by adjusting the position of the reflecting magnet with respect to the accelerator. The system generates 5 Gy m² min⁻¹ beams of 25 MV photons and 5 to 25 MeV electrons. Extensive use of tungsten shielding minimizes neutron leakage. The photon mode surface dose is reduced by a carefully optimized electron filter. An improved scanning system gives exceptionally low electron-mode photon contamination.", "title": "" }, { "docid": "33ba3582dc7873a7e14949775a9b26c1", "text": "Few conservation projects consider climate impacts or have a process for developing adaptation strategies. To advance climate adaptation for biodiversity conservation, we tested a step-by-step approach to developing adaptation strategies with 20 projects from diverse geographies. Project teams assessed likely climate impacts using historical climate data, future climate predictions, expert input, and scientific literature. They then developed adaptation strategies that considered ecosystems and species of concern, project goals, climate impacts, and indicators of progress. Project teams identified 176 likely climate impacts and developed adaptation strategies to address 42 of these impacts. The most common impacts were to habitat quantity or quality, and to hydrologic regimes. Nearly half of expected impacts were temperature-mediated. Twelve projects indicated that the project focus, either focal ecosystems and species or project boundaries, need to change as a result of considering climate impacts. More than half of the adaptation strategies were resistance strategies aimed at preserving the status quo. The rest aimed to make ecosystems and species more resilient in the face of expected changes. All projects altered strategies in some way, either by adding new actions, or by adjusting existing actions. Habitat restoration and enactment of policies and regulations were the most frequently prescribed, though every adaptation strategy required a unique combination of actions. While the effectiveness of these adaptation strategies remains to be evaluated, the application of consistent guidance has yielded important early lessons about how, when, and how often conservation projects may need to be modified to adapt to climate change.", "title": "" }, { "docid": "db3abbca12b7a1c4e611aa3707f65563", "text": "This paper describes the background and methods for the production of CIDOC-CRM compliant data sets from diverse collections of source data. The construction of such data sets is based on data in column format, typically exported for databases, as well as free text, typically created through scanning and OCR processing or transcription.", "title": "" }, { "docid": "8f4ce2d2ec650a3923d27c3188f30f38", "text": "Synthetic aperture radar (SAR) interferometry is a modern efficient technique that allows reconstructing the height profile of the observed scene. However, apart for the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel. 
Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.", "title": "" }, { "docid": "73a5466e9e471a015c601f75d2147ace", "text": "In this paper we have proposed, developed and tested a hardware module based on Arduino Uno Board and Zigbee wireless technology, which measures the meteorological data, including air temperature, dew point temperature, barometric pressure, relative humidity, wind speed and wind direction. This information is received by a specially designed application interface running on a PC connected through Zigbee wireless link. The proposed system is also a mathematical model capable of generating short time local alerts based on the current weather parameters. It gives an on line and real time effect. We have also compared the data results of the proposed system with the data values of Meteorological Station Chandigarh and Snow & Avalanche Study Establishment Chandigarh Laboratory. The results have come out to be very precise. The idea behind to this work is to monitor the weather parameters, weather forecasting, condition mapping and warn the people from its disastrous effects.", "title": "" }, { "docid": "2cbd6b3d19d0cf843a9e18f5b23872d2", "text": "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. 
Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code1 and the detection results for person used will be publicly available for further research.", "title": "" }, { "docid": "7d11d25dc6cd2822d7f914b11b7fe640", "text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.", "title": "" }, { "docid": "20c3a77fd8b9c9ffc722193c4bca2f2a", "text": "The coating quality of a batch of lab-scale, sustained-release coated tablets was analysed by terahertz pulsed imaging (TPI). Terahertz radiation (2 to 120 cm(-1)) is particularly interesting for coating analysis as it has the capability to penetrate through most pharmaceutical excipients, and hence allows non-destructive coating analysis. Terahertz pulsed spectroscopy (TPS) was employed for the determination of the terahertz refractive indices (RI) on the respective sustained-release excipients used in this study. The whole surface of ten tablets with 10 mg/cm(2) coating was imaged using the fully-automated TPI imaga2000 system. Multidimensional coating thickness or signal intensity maps were reconstructed for the analysis of coating layer thickness, reproducibility, and uniformity. The results from the TPI measurements were validated with optical microscopy imaging and were found to be in good agreement with this destructive analytical technique. The coating thickness around the central band was generally 33% thinner than that on the tablet surfaces. Bimodal coating thickness distribution was detected in some tablets, with thicker coatings around the edges relative to the centre. Aspects of coating defects along with their site, depth and size were identified with virtual terahertz cross-sections. The inter-day precision of the TPI measurement was found to be within 0.5%.", "title": "" }, { "docid": "eae0f8a921b301e52c822121de6c6b58", "text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. 
In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for image classification on the CIFAR-10 dataset. Our 14-layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system is publicly available.", "title": "" } ]
scidocsrr
681f3067c39e973ceac1f3e9f4ff26e0
Classifying the Histology Image of Uterine Cervical Cancer
[ { "docid": "effd296da8b20f02658ddb2eb6210fc1", "text": "Multimegawatt wind-turbine systems, often organized in a wind park, are the backbone of the power generation based on renewable-energy systems. This paper reviews the most-adopted wind-turbine systems, the adopted generators, the topologies of the converters, the generator control and grid connection issues, as well as their arrangement in wind parks.", "title": "" }, { "docid": "2048695744ff2a7905622dfe671ddb88", "text": "Many applications call for high step-up dc–dc converters that do not require isolation. Some dc–dc converters can provide high step-up voltage gain, but with the penalty of either an extreme duty ratio or a large amount of circulating energy. DC–DC converters with coupled inductors can provide high voltage gain, but their efficiency is degraded by the losses associated with leakage inductors. Converters with active clamps recycle the leakage energy at the price of increasing topology complexity. A family of high-efficiency, high step-up dc–dc converters with simple topologies is proposed in this paper. The proposed converters, which use diodes and coupled windings instead of active switches to realize functions similar to those of active clamps, perform better than their active-clamp counterparts. High efficiency is achieved because the leakage energy is recycled and the output rectifier reverse-recovery problem is alleviated.", "title": "" } ]
[ { "docid": "22c9f931198f054e7994e7f1db89a194", "text": "Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting Euclidean distance metric often fail to return satisfactory results mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named as “Laplacian Regularized Metric Learning” (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique.", "title": "" }, { "docid": "8e6efa696b960cf08cf1616efc123cbd", "text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.", "title": "" }, { "docid": "b209b606f09888157098a3d6054df148", "text": "A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ∼70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. 
We make our code and data freely available.1", "title": "" }, { "docid": "b72c83f9daa1c46c7455c27f193bd0af", "text": "This paper shows that over time, expected market illiquidity positively affects ex ante stock excess return, suggesting that expected stock excess return partly represents an illiquidity premium. This complements the cross-sectional positive return–illiquidity relationship. Also, stock returns are negatively related over time to contemporaneous unexpected illiquidity. The illiquidity measure here is the average across stocks of the daily ratio of absolute stock return to dollar volume, which is easily obtained from daily stock data for long time series in most stock markets. Illiquidity affects more strongly small firm stocks, thus explaining time series variations in their premiums over time. r 2002 Elsevier Science B.V. All rights reserved. JEL classificaion: G12", "title": "" }, { "docid": "d6ca5374362045167695c776ecd218cf", "text": "Despite interest in structured peer-to-peer overlays and t heir scalability to millions of nodes, few, if any, overlays operate at that scale. T his paper considers the distributed hash table extensions supported by modern BitT orrent clients, which implement a Kademlia-style structured overlay network amo ng millions of BitTorrent users. As there are two disjoint Kademlia-based DHTs in use, we collected two weeks of traces from each DHT. We examine churn, reachabi lity, latency, and liveness of nodes in these overlays, and identify a variety o f problems, such as median lookup times of over a minute. We show that Kademlia’s ch oi e of iterative routing and its lack of a preferential refresh of its local ne ighborhood cause correctness problems and poor performance. We also identify im ple entation bugs, design issues, and security concerns that limit the effecti v ness of these DHTs and we offer possible solutions for their improvement.", "title": "" }, { "docid": "8e03643ffcab0fbbcaabb32d5503e653", "text": "This paper is an in-depth review on silicon implementations of threshold logic gates that covers several decades. In this paper, we will mention early MOS threshold logic solutions and detail numerous very-large-scale integration (VLSI) implementations including capacitive (switched capacitor and floating gate with their variations), conductance/current (pseudo-nMOS and output-wired-inverters, including a plethora of solutions evolved from them), as well as many differential solutions. At the end, we will briefly mention other implementations, e.g., based on negative resistance devices and on single electron technologies.", "title": "" }, { "docid": "8f6107d045b94917cf0f0bd3f262a1bf", "text": "An interesting challenge for explainable recommender systems is to provide successful interpretation of recommendations using structured sentences. It is well known that user-generated reviews, have strong influence on the users' decision. Recent techniques exploit user reviews to generate natural language explanations. In this paper, we propose a character-level attention-enhanced long short-term memory model to generate natural language explanations. We empirically evaluated this network using two real-world review datasets. The generated text present readable and similar to a real user's writing, due to the ability of reproducing negation, misspellings, and domain-specific vocabulary.", "title": "" }, { "docid": "7bda4b1ef78a70e651f74995b01c3c1e", "text": "Given a graph, how can we extract good features for the nodes? 
For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization.", "title": "" }, { "docid": "1023cd0b40e24429cb39b4d38477cada", "text": "Organizations that migrate from identity-centric to role-based Identity Management face the initial task of defining a valid set of roles for their employees. Due to its capabilities of automated and fast role detection, role mining as a solution for dealing with this challenge has gathered a rapid increase of interest in the academic community. Research activities throughout the last years resulted in a large number of different approaches, each covering specific aspects of the challenge. In this paper, firstly, a survey of the research area provides insight into the development of the field, underlining the need for a comprehensive perspective on role mining. Consecutively, a generic process model for role mining including preand post-processing activities is introduced and existing research activities are classified according to this model. The goal is to provide a basis for evaluating potentially valuable combinations of those approaches in the future.", "title": "" }, { "docid": "2071123e78255257724e6c2a99676ee9", "text": "In a Web Advertising Traffic Operation it’s necessary to manage the day-to-day trafficking, pacing and optimization of digital and paid social campaigns. The data analyst on Traffic Operation can not only quickly provide answers but also speaks the language of the Process Manager and visually displays the discovered process problems. In order to solve a growing number of complaints in the customer service process, the weaknesses in the process itself must be identified and communicated to the department. With the help of Process Mining for the CRM data it is possible to identify unwanted loops and delays in the process. With this paper we propose a process discovery based on Machine Learning technique to automatically discover variations and detect at first glance what the problem is, and undertake corrective measures.", "title": "" }, { "docid": "167703a2adda8ec3ab7d463ab5693f77", "text": "Conventional DEA models assume deterministic and precise data for input and output observations in a static situation, and their DMUs are often ranked incompletely. 
To work with interval data, DMUS' complete ranking as well as dynamic assessment issues synchronously, we put forward a hybrid model for evaluating the relative efficiencies of a set of DMUs over an observed time period with consideration of interval DEA, super-efficiency DEA and dynamic DEA. However, few researchers, if any, considered this issue within the combination of these three models. The hybrid model proposed in this paper enables us to (i) take interval data in input and output into account, (ii) rank DEA efficient DMUs completely, (iii) obtain the overall dynamic efficiency of DMUs over the entire observed period. We finally illustrate the calculation procedure of the proposed approach by a numerical example.", "title": "" }, { "docid": "d51ef75ccf464cc03656210ec500db44", "text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.", "title": "" }, { "docid": "ce2a19f9f3ee13978845f1ede238e5b2", "text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.", "title": "" }, { "docid": "ff9ac94a02a799e63583127ac300b0b4", "text": "Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. 
However, approximate inference techniques are usually necessary when dealing with nonlinear dynamics and observation functions. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and transition functions from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods.", "title": "" }, { "docid": "376471fa0c721de5a319e990a5dbccc8", "text": "The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.", "title": "" }, { "docid": "ce2ef27f032d30ce2bc6aa5509a58e49", "text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.", "title": "" }, { "docid": "0b3ed0ce26999cb6188fb0c88eb483ab", "text": "We consider the problem of learning causal networks with int erventions, when each intervention is limited in size under Pearl’s Structural Equation Model with independent e rrors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the e dges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. 
We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in t e worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is ar guably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on th e number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bound s o the number of interventions. Building on observations about induced trees, we give a new determinist ic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable sche me is anα-approximation algorithm where α is the independence number of the graph. We also show that there exi st graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme , there are graph classes for which the required number of experiments is multiplicativelyα away from our lower bound. In simulations, our algorithm almost always performs very c lose to the lower bound, while the approach based on separating systems for complete graphs is significantly wor se for random chordal graphs.", "title": "" }, { "docid": "d3e8dce306eb20a31ac6b686364d0415", "text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.", "title": "" }, { "docid": "ccc3cf21c4c97f9c56915b4d1e804966", "text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.", "title": "" }, { "docid": "8cb8cd4fbed5f811c9000add0b318a44", "text": "Quality of Experience (QoE) has gained enormous attention during the recent years. So far, most of the existing QoE research has focused on audio and video streaming applications, although HTTP traffic carries the majority of traffic in the residential broadband Internet. However, existing QoE models for this domain do not consider temporal dynamics or historical experiences of the user's satisfaction while consuming a certain service. 
This psychological influence factor of past experience is referred to as the memory effect. The first contribution of this paper is the identification of the memory effect as a key influence factor for Web QoE modeling based on subjective user studies. As a second contribution, three different QoE models are proposed which consider the implications of the memory effect and imply the required extensions of the basic models. The proposed Web QoE models are described with a) support vector machines, b) iterative exponential regressions, and c) two-dimensional hidden Markov models.", "title": "" } ]
scidocsrr
5bfc74165797e50564966ea96b6eb239
Security models for web-based applications
[ { "docid": "dd95ff2da8189da46528dc1b76665f02", "text": "In this paper, we develop a new paradigm for access ontrol and authorization management, called task-based authorization control s (TBAC). TBAC models access controls from a task-oriented perspective th an e traditional subject-object one. Access mediation now involves authorizations at various points during the completion of tasks in accordance with some applica tion logic. By taking a taskoriented view of access control and authorizations, TBAC lays the foundation for research into a new breed of “active” security mode ls that are required for agentbased distributed computing and workflow management .", "title": "" } ]
[ { "docid": "8899dc843831f592a89d0f6cf9688dfc", "text": "Deep neural networks have yielded immense success in speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks for recommender systems has received a relatively little introspection. Also, different recommendation scenarios have their own issues which creates the need for different approaches for recommendation. Specifically in news recommendation a major problem is that of varying user interests. In this work, we use deep neural networks with attention to tackle the problem of news recommendation. The key factor in user-item based collaborative filtering is to identify the interaction between user and item features. Matrix factorization is one of the most common approaches for identifying this interaction. It maps both the users and the items into a joint latent factor space such that user-item interactions in that space can be modeled as inner products in that space. Some recent work has used deep neural networks with the motive to learn an arbitrary function instead of the inner product that is used for capturing the user-item interaction. However, directly adapting it for the news domain does not seem to be very suitable. This is because of the dynamic nature of news readership where the interests of the users keep changing with time. Hence, it becomes challenging for recommendation systems to model both user preferences as well as account for the interests which keep changing over time. We present a deep neural model, where a non-linear mapping of users and item features are learnt first. For learning a non-linear mapping for the users we use an attention-based recurrent layer in combination with fully connected layers. For learning the mappings for the items we use only fully connected layers. We then use a ranking based objective function to learn the parameters of the network. We also use the content of the news articles as features for our model. Extensive experiments on a real-world dataset show a significant improvement of our proposed model over the state-of-the-art by 4.7% (Hit Ratio@10). Along with this, we also show the effectiveness of our model to handle the user cold-start and item cold-start problems. ? Vaibhav Kumar and Dhruv Khattar are the corresponding authors", "title": "" }, { "docid": "724f3775b6fb63507c1a327367675a9d", "text": "Machine-learning methods are becoming increasingly popular for automated data analysis. However, standard methods do not scale up to massive scientific and business data sets without expensive hardware. This paper investigates a practical alternative for scaling up: the use of distributed processing to take advantage of the often dormant PCs and workstations available on local networks. Each workstation runs a common rule-learning program on a subset of the data. We first show that for commonly used rule evaluation criteria, a simple form of cooperation can guarantee that a rule will look good to the set of cooperating learners if and only if it would look good to a single learner operating with the entire data set. We then show how such a system can further capitalize on different perspectives by sharing learned knowledge for significant reduction in search effort. We demonstrate the power of the method by learning from a massive data set taken from the domain of cellular fraud detection. 
Finally, we provide an overview of other methods for scaling up machine learning.", "title": "" }, { "docid": "34a8413935d1724c626f505421480f54", "text": "In this paper, we introduce the Reinforced Mnemonic Reader for machine comprehension (MC) task, which aims to answer a query about a given context document. We propose several novel mechanisms that address critical problems in MC that are not adequately solved by previous works, such as enhancing the capacity of encoder, modeling long-term dependencies of contexts, refining the predicted answer span, and directly optimizing the evaluation metric. Extensive experiments on TriviaQA and Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-theart results.", "title": "" }, { "docid": "751fffb80b29e2463117461fde03e54c", "text": "Many applications using wireless sensor networks (WSNs) aim at providing friendly and intelligent services based on the recognition of human's activities. Although the research result on wearable computing has been fruitful, our experience indicates that a user-free sensor deployment is more natural and acceptable to users. In our system, activities were recognized through matching the movement patterns of the objects, to which tri-axial accelerometers had been attached. Several representative features, including accelerations and their fusion, were calculated and three classifiers were tested on these features. Compared with decision tree (DT) C4.5 and multiple-layer perception (MLP), support vector machine (SVM) performs relatively well across different tests. Additionally, feature selection are discussed for better system performance for WSNs", "title": "" }, { "docid": "3083acd7ecb327ceac734d49a8cb5c39", "text": "In this paper, we present PicToon, a cartoon system which can generate a personalized cartoon face from an input Picture. PicToon is easy to use and requires little user interaction. Our system consists of three major components: an image-based Cartoon Generator, an interactive Cartoon Editor for exaggeration, and a speech-driven Cartoon Animator. First, to capture an artistic style, the cartoon generation is decoupled into two processes: sketch generation and stroke rendering. An example-based approach is taken to automatically generate sketch lines which depict the facial structure. An inhomogeneous non-parametric sampling plus a flexible facial template is employed to extract the vector-based facial sketch. Various styles of strokes can then be applied. Second, with the pre-designed templates in Cartoon Editor, the user can easily make the cartoon exaggerated or more expressive. Third, a real-time lip-syncing algorithm is also developed that recovers a statistical audio-visual mapping between the character's voice and the corresponding lip configuration. Experimental results demonstrate the effectiveness of our system.", "title": "" }, { "docid": "c4ca4238a0b923820dcc509a6f75849b", "text": "1", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "0926e8e4dd33240c2ad4e028980f3f95", "text": "The medical evaluation is an important part of the clinical and legal process when child sexual abuse is suspected. Practitioners who examine children need to be up to date on current recommendations regarding when, how, and by whom these evaluations should be conducted, as well as how the medical findings should be interpreted. A previously published article on guidelines for medical care for sexually abused children has been widely used by physicians, nurses, and nurse practitioners to inform practice guidelines in this field. Since 2007, when the article was published, new research has suggested changes in some of the guidelines and in the table that lists medical and laboratory findings in children evaluated for suspected sexual abuse and suggests how these findings should be interpreted with respect to sexual abuse. A group of specialists in child abuse pediatrics met in person and via online communication from 2011 through 2014 to review published research as well as recommendations from the Centers for Disease Control and Prevention and the American Academy of Pediatrics and to reach consensus on if and how the guidelines and approach to interpretation table should be updated. The revisions are based, when possible, on data from well-designed, unbiased studies published in high-ranking, peer-reviewed, scientific journals that were reviewed and vetted by the authors. When such studies were not available, recommendations were based on expert consensus.", "title": "" }, { "docid": "6b214fdd60a1a4efe27258c2ab948086", "text": "Ambient Assisted Living (AAL) aims to create innovative technical solutions and services to support independent living among older adults, improve their quality of life and reduce the costs associated with health and social care. AAL systems provide health monitoring through sensor based technologies to preserve health and functional ability and facilitate social support for the ageing population. Human activity recognition (HAR) is an enabler for the development of robust AAL solutions, especially in safety critical environments. Therefore, HAR models applied within this domain (e.g. for fall detection or for providing contextual information to caregivers) need to be accurate to assist in developing reliable support systems. In this paper, we evaluate three machine learning algorithms, namely Support Vector Machine (SVM), a hybrid of Hidden Markov Models (HMM) and SVM (SVM-HMM) and Artificial Neural Networks (ANNs) applied on a dataset collected between the elderly and their caregiver counterparts. Detected activities will later serve as inputs to a bidirectional activity awareness system for increasing social connectedness. Results show high classification performances for all three algorithms. Specifically, the SVM-HMM hybrid demonstrates the best classification performance. In addition to this, we make our dataset publicly available for use by the machine learning community.", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. 
Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" }, { "docid": "7df626465d52dfe5859e682c685c62bc", "text": "This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distribu-tional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of com-positional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. Future research should also investigate how the error correction component for content word combinations could be implemented. 3 4 Acknowledgements First and foremost, I would like to express my profound gratitude to my supervisor, Ted Briscoe, for his constant support and encouragement throughout the course of my research. This work would not have been possible without his invaluable guidance and advice. I am immensely grateful to my examiners, Ann Copestake and Stephen Pulman, for providing their advice and constructive feedback on the final version of the dissertation. I am also thankful to my colleagues at the Natural Language and Information Processing research group for the insightful and inspiring discussions over these years. In particular, I would like to express my gratitude to would like to thank …", "title": "" }, { "docid": "ce7175f868e2805e9e08e96a1c9738f4", "text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. 
In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.", "title": "" }, { "docid": "49e786f66641194a22bf488c5e97ed7f", "text": "The non-negative matrix factorization (NMF) determines a lower rank approximation of a matrix where an interger \"!$# is given and nonnegativity is imposed on all components of the factors % & (' and % )'* ( . The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives non-subtractive combinations of non-negative basis vectors. In this paper, we introduce an algorithm for the NMF based on alternating non-negativity constrained least squares (NMF/ANLS) and the active set based fast algorithm for non-negativity constrained least squares with multiple right hand side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by +-, -norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.", "title": "" }, { "docid": "23d42976a9651203e0d4dd1c332234ae", "text": "BACKGROUND\nStatistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. 
The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem.\n\n\nRESULTS\nThe terms in OBCS including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprehends 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs .\n\n\nCONCLUSIONS\nThe Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.", "title": "" }, { "docid": "f2c846f200d9c59362bf285b2b68e2cd", "text": "A Root Cause Failure Analysis (RCFA) for repeated impeller blade failures in a five stage centrifugal propane compressor is described. The initial failure occurred in June 2007 with a large crack found in one blade on the third impeller and two large pieces released from adjacent blades on the fourth impeller. An RCFA was performed to determine the cause of the failures. The failure mechanism was identified to be high cycle fatigue. Several potential causes related to the design, manufacture, and operation of the compressor were examined. The RCFA concluded that the design and manufacture were sound and there were no conclusive issues with respect to operation. A specific root cause was not identified. In June 2009, a second case of blade cracking occurred with a piece once again released from a single blade on the fourth impeller. Due to the commonality with the previous instance this was identified as a repeat failure. Specifically, both cases had occurred in the same compressor whereas, two compressors operating in identical service in adjacent Liquefied natural Gas (LNG) trains had not encountered the problem. A second RCFA was accordingly launched with the ultimate objective of preventing further repeated failures. Both RCFA teams were established comprising of engineers from the End User (RasGas), the OEM (Elliott Group) and an independent consultancy (Southwest Research Institute). 
The scope of the current investigation included a detailed metallurgical assessment, impeller modal frequency assessment, steady and unsteady computational fluid dynamics (CFD) assessment, finite element analyses (FEA), fluid structure interaction (FSI) assessment, operating history assessment and a comparison change analysis. By the process of elimination, the most probable causes were found to be associated with: • vane wake excitation of either the impeller blade leading edge modal frequency from severe mistuning and/or unusual response of the 1-diameter cover/blades modal frequency • mist carry over from third side load upstream scrubber • end of curve operation in the compressor rear section INTRODUCTION RasGas currently operates seven LNG trains at Ras Laffan Industrial City, Qatar. Train 3 was commissioned in 2004 with a nameplate LNG production of 4.7 Mtpa which corresponds to a wet sour gas feed of 790 MMscfd (22.37 MMscmd). Trains 4 and 5 were later commissioned in 2005 and 2006 respectively. They were also designed for a production 4.7 Mtpa LNG but have higher wet sour gas feed rates of 850 MMscfd (24.05 MMscmd). Despite these differences, the rated operation of the propane compressor is identical in each train. Figure 1. APCI C3-MR Refrigeration system for Trains 3, 4 and 5 The APCI C3-MR refrigeration cycle (Roberts, et al. 2002), depicted in Figure 1 is common for all three trains. Propane is circulated in a continuous loop between four compressor inlets and a single discharge. The compressed discharge gas is cooled and condensed in three sea water cooled heat exchangers before being routed to the LLP, LP, MP and HP evaporators. Here, the liquid propane is evaporated by the transfer of heat from the warmer feed and MR gas streams. It finally passes through one of the four suction scrubbers before re-entering the compressor as a gas. Although not shown, each section inlet has a dedicated anti-surge control loop from the de-superheater discharge to the suction scrubber inlet. A cross section of the propane compressor casing and rotor is illustrated in Figure 2. It is a straight through centrifugal unit with a horizontally split casing. Five impellers are mounted upon the 21.3 ft (6.5 m) long shaft. Three side loads add gas upstream of the suction at impellers 2, 3 & 4. The impellers are of two piece construction, with each piece fabricated from AISI 4340 forgings that were heat treated such that the material has sufficient strength and toughness for operation at temperatures down to -50F (-45.5C). The blades are milled to the hub piece and the cover piece was welded to the blades using a robotic metal inert gas (MIG) welding process. The impellers are mounted to the shaft with an interference fit. The thrust disc is mounted to the shaft with a line on line fit and antirotation key. The return channel and side load inlets are all vaned to align the downstream swirl angle. The impeller diffusers are all vaneless. A summary of the relevant compressor design parameters is given in Table 1. The complete compressor string is also depicted in Figure 1. The propane compressor is coupled directly to the HP MR compressor and driven by a GE Frame 7EA gas turbine and ABB 16086 HP (12 MW) helper motor at 3600 rpm rated shaft speed. Table 1. 
Propane Compressor design parameters Component Material No of", "title": "" }, { "docid": "c13a77ff62c1a2fd8df1e6a35c2d4b0f", "text": "Recently, incorporating a learned dynamic model in generating imagined data has been shown to be an effective way to reduce sample-complexity of model-free RL. Such model-free/model-based hybrid approaches usually require rolling out the dynamic model a fixed number of steps into the future. We argue that such fixed rollout is problematic for several reasons. We propose a simple adaptive rollout algorithm to improve the model-based component of these approaches and conduct experiment on CartPole task to evaluate the effects of adaptive rollout.", "title": "" }, { "docid": "349bf812e97327957e3bb1b4a786e3b9", "text": "UML and UML-based development methods have become de facto standards in industry, and there are many claims for the positive effects of modelling object-oriented systems using methods based on UML. However, there is no reported empirical evaluation of UML-based development in large, industrial projects. This paper reports a case study in ABB, a global company with 120,000 employees, conducted to identify immediate benefits as well as difficulties and their causes when introducing UML-based development in large projects. ABB decided to use UML-based development in the company’s system development projects as part of an effort to enable certification according to the IEC 61508 safety standard. A UML-based development method was first applied in a large, international project with 230 system developers, testers and managers. The goal of the project was to build a new version of a safety-critical process control system. Most of the software was embedded. The project members were mostly newcomers to the use of UML. Interviews with 16 system developers and project managers at their sites in Sweden and Norway were conducted to identify the extent to which the introduction of UML-based development had improved their development process. The interviewees had experienced improvements with traceability from requirements to code, design of the code, and development of test cases as well as in communication and documentation. These results thus support claims in the literature regarding improvements that may be obtained through the use of UML. However, the results also show that the positive effects of UML-based development were reduced due to (1) legacy code that it was not feasible to reverse engineer into UML, (2) the distribution of requirements to development teams based on physical units and not on functionality, (3) training that was not particularly adapted to this project and considered too expensive to give to project members not directly involved in development with UML, and (4) a choice of modelling tools with functionality that was not in accordance with the needs of the project. The results from this study should be useful in enabling other UML adopters to have more realistic expectations and a better basis for making project management decisions.", "title": "" }, { "docid": "569fed958b7a471e06ce718102687a1e", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. 
In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "28971d75a464178afe93e0ef0f4479c5", "text": "OBJECTIVE\nTo compare two levels of stress (solitary confinement (SC) and non-SC) among remand prisoners as to incidence of psychiatric disorders in relation to prevalent disorders.\n\n\nMETHOD\nLongitudinal repeated assessments were carried out from the start and during the remand phase of imprisonment. Both interview-based and self-reported measures were applied to 133 remand prisoners in SC and 95 remand prisoners in non-SC randomly selected in a parallel study design.\n\n\nRESULTS\nIncidence of psychiatric disorders developed in the prison was significantly higher in SC prisoners (28%) than in non-SC prisoners (15%). Most disorders were adjustment disorders, with depressive disorders coming next. Incident psychotic disorders were rare. The difference regarding incidence was primarily explained by level of stress (i.e. prison form) rather than confounding factors. Quantitative measures of psychopathology (Hamilton Scales and General Health Questionnaire) were significantly higher in subjects with prevalent and incident disorders compared to non-disordered subjects.\n\n\nCONCLUSION\nDifferent levels of stress give rise to different incidence of psychiatric morbidity among remand prisoners. The surplus of incident disorders among SC prisoners is related to SC, which may act as a mental health hazard.", "title": "" }, { "docid": "6014fd8edf8c7417d31d6d68e615ce67", "text": "To widen their accessibility and increase their utility, intelligent agents must be able to learn complex behaviors as specified by (non-expert) human users. Moreover, they will need to learn these behaviors within a reasonable amount of time while efficiently leveraging the sparse feedback a human trainer is capable of providing. Recent work has shown that human feedback can be characterized as a critique of an agent’s current behavior rather than as an alternative reward signal to be maximized, culminating in the COnvergent Actor-Critic by Humans (COACH) algorithm for making direct policy updates based on human feedback. Our work builds on COACH, moving to a setting where the agent’s policy is represented by a deep neural network. We employ a series of modifications on top of the original COACH algorithm that are critical for successfully learning behaviors from high-dimensional observations, while also satisfying the constraint of obtaining reduced sample complexity. We demonstrate the effectiveness of our Deep COACH algorithm in the rich 3D world of Minecraft with an agent that learns to complete tasks by mapping from raw pixels to actions using only real-time human feedback in 10–15 minutes of interaction.", "title": "" } ]
scidocsrr
0327d7c4c95be05207d977669bd13ba5
Facial expression recognition from video sequences: temporal and static modeling
[ { "docid": "6c03036f1b5af68fbaa9f516f850f94f", "text": "Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First the models are very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications. Second the models, when applied properly, work very well in practice for several important applications. In this paper we attempt to carefully and methodically review the theoretical aspects of this type of statistical modeling and show how they have been applied to selected problems in machine recognition of speech.", "title": "" }, { "docid": "8366003636c8596841f749d69346deee", "text": "Probabilistic classifiers are developed by assuming generative models which are product distributions over the original attribute space (as in naive Bayes) or more involved spaces (as in general Bayesian networks). While this paradigm has been shown experimentally successful on real world applications, despite vastly simplified probabilistic assumptions, the question of why these approaches work is still open. This paper resolves this question. We show that almost all joint distributions with a given set of marginals (i.e., all distributions that could have given rise to the classifier learned) or, equivalently, almost all data sets that yield this set of marginals, are very close (in terms of distributional distance) to the product distribution on the marginals; the number of these distributions goes down exponentially with their distance from the product distribution. Consequently, as we show, for almost all joint distributions with this set of marginals, the penalty incurred in using the marginal distribution rather than the true one is small. In addition to resolving the puzzle surrounding the success of probabilistic classifiers our results contribute to understanding the tradeoffs in developing probabilistic classifiers and will help in developing better classifiers.", "title": "" } ]
[ { "docid": "4e70ee2ce36077cf14aa9d99db8c13ad", "text": "Necrotizing fasciitis is a progressive, life-threatening, bacterial infection of the skin, the subcutaneous tissue and the underlying fascia, in most cases caused by -hemolytic group A Streptococcus. Only early diagnosis and aggressive therapy including broad spectrum antibiotics and surgical intervention can avoid systemic toxicity with a high mortality rate. This disease is commonly known to occur in the lower extremities and trunk, and only rarely in the head and neck region, the face being rarest finding. When located in the face necrotizing fasciitis is associated with severe cosmetic and functional complication due to the invasive nature, infection and often due to the necessary surgical treatment. In the following article, we present the successful diagnosis and management of an isolated facial necrotizing fasciitis as a consequence of odontogenic infection.", "title": "" }, { "docid": "5ce36b14860b2348bc34fd0a01a5ea87", "text": "The Orange “Data for Development” (D4D) challenge is an open data challenge on anonymous call patterns of Orange’s mobile phone users in Ivory Coast. The goal of the challenge is to help address society development questions in novel ways by contributing to the socio-economic development and well-being of the Ivory Coast population. Participants to the challenge are given access to four mobile phone datasets and the purpose of this paper is to describe the four datasets. The website http://www.d4d.orange.com contains more information about the participation rules. The datasets are based on anonymized Call Detail Records (CDR) of phone calls and SMS exchanges between five million of Orange’s customers in Ivory Coast between December 1, 2011 and April 28, 2012. The datasets are: (a) antenna-to-antenna traffic on an hourly basis, (b) individual trajectories for 50,000 customers for two week time windows with antenna location information, (3) individual trajectories for 500,000 customers over the entire observation period with sub-prefecture location information, and (4) a sample of communication graphs for 5,000 customers. The geofast web interface www.geofast.net for the visualisation of mobile phone communications (countries available: France, Belgium, Ivory Coast). ∗University of Louvain, B-1348 Louvain-la-Neuve, Belgium. vincent.blondel@uclouvain.be †Orange Labs, France 1 ar X iv :1 21 0. 01 37 v2 [ cs .C Y ] 2 8 Ja n 20 13", "title": "" }, { "docid": "3acb0ab9f20e1efece96a2414a9c9c8c", "text": "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. 
Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "title": "" }, { "docid": "da694b74b3eaae46d15f589e1abef4b8", "text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model) in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R = 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff was observed during the driest months (May and July). Prediction of daily runoff was less accurate (R = 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R = 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0–1 t ha−1 y−1), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha−1 y−1. Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify “hot spots” on the landscape.", "title": "" }, { "docid": "84a2d26a0987a79baf597508543f39b6", "text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to 'justify' ratings with text.
Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.", "title": "" }, { "docid": "485aba813ad5587a6acb91bb3ad5ced9", "text": "Nowadays, the transformerless inverters have become a widespread trend in the single-phase grid-connected photovoltaic (PV) systems because of the low cost and high efficiency concerns. Unfortunately, due to the non-galvanic isolation configuration, the ground leakage current would appear through the PV parasitic capacitance into the ground, which induces the physical danger and serious EMI problems. A novel transformerless single-phase inverter with two unipolar SPWM control strategies is proposed in this paper. The inverter can guarantee no ground leakage current and high reliability by applying either of the SPWM strategies. Meanwhile, the low total harmonic distortion (THD) of the grid-connected current is achieved thanks to the alleviation of the dead time effect. Besides, the required input DC voltage is the same low as that of the full-bridge inverter. Furthermore, the output filter inductance is reduced greatly due to the three-level output voltage, which leads to the high power density and high efficiency. At last, a 1kW prototype has been built and tested to verify the theoretical analysis of the paper.", "title": "" }, { "docid": "a2cc07e33bf3fc3398c70e0baab791a9", "text": "Bone age assessment (BAA) of unknown people is one of the most important topics in clinical procedure for evaluation of biological maturity of children. BAA is performed usually by comparing an X-ray of left hand wrist with an atlas of known sample bones. Recently, BAA has gained remarkable ground from academia and medicine. Manual methods of BAA are time-consuming and prone to observer variability. This is a motivation for developing automated methods of BAA. However, there is considerable research on the automated assessment, much of which are still in the experimental stage. This survey provides taxonomy of automated BAA approaches and discusses the challenges. Finally, we present suggestions for future research.", "title": "" }, { "docid": "6cd317113158241a98517ad5a8247174", "text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.", "title": "" }, { "docid": "cb85db604bf21751766daf3751dd73bd", "text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. 
An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.", "title": "" }, { "docid": "867a6923a650bdb1d1ec4f04cda37713", "text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.", "title": "" }, { "docid": "c6029c95b8a6b2c6dfb688ac049427dc", "text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.", "title": "" }, { "docid": "3bc800074b32fdf03812638d6a57f23d", "text": "Various low-latency anonymous communication systems such as Tor and Anoymizer have been designed to provide anonymity service for users. In order to hide the communication of users, many anonymity systems pack the application data into equal-sized cells (e.g., 512 bytes for Tor, a known real-world, circuit-based low-latency anonymous communication network). 
In this paper, we investigate a new cell counter based attack against Tor, which allows the attacker to confirm anonymous communication relationship among users very quickly. In this attack, by marginally varying the counter of cells in the target traffic at the malicious exit onion router, the attacker can embed a secret signal into the variation of cell counter of the target traffic. The embedded signal will be carried along with the target traffic and arrive at the malicious entry onion router. Then an accomplice of the attacker at the malicious entry onion router will detect the embedded signal based on the received cells and confirm the communication relationship among users. We have implemented this attack against Tor and our experimental data validate its feasibility and effectiveness. There are several unique features of this attack. First, this attack is highly efficient and can confirm very short communication sessions with only tens of cells. Second, this attack is effective and its detection rate approaches 100% with a very low false positive rate. Third, it is possible to implement the attack in a way that appears to be very difficult for honest participants to detect (e.g. using our hopping-based signal embedding).", "title": "" }, { "docid": "1524297aeea3a28a542d8006607266bf", "text": "Fully automating machine learning pipeline is one of the outstanding challenges of general artificial intelligence, as practical machine learning often requires costly human driven process, such as hyper-parameter tuning, algorithmic selection, and model selection. In this work, we consider the problem of executing automated, yet scalable search for finding optimal gradient based meta-learners in practice. As a solution, we apply progressive neural architecture search to proto-architectures by appealing to the model agnostic nature of general gradient based meta learners. In the presence of recent universality result of Finn et al.[9], our search is a priori motivated in that neural network architecture search dynamics—automated or not—may be quite different from that of the classical setting with the same target tasks, due to the presence of the gradient update operator. A posteriori, our search algorithm, given appropriately designed search spaces, finds gradient based meta learners with non-intuitive proto-architectures that are narrowly deep, unlike the inception-like structures previously observed in the resulting architectures of traditional NAS algorithms. Along with these notable findings, the searched gradient based meta-learner achieves state-of-the-art results on the few shot classification problem on Mini-ImageNet with 76.29% accuracy, which is an 13.18% improvement over results reported in the original MAML paper. To our best knowledge, this work is the first successful AutoML implementation in the context of meta learning.", "title": "" }, { "docid": "1c58342d02aaab2f3ac15770effeb156", "text": "Color Doppler US (CDUS) has been used for evaluation of cerebral venous sinuses in neonates. However, there is very limited information available regarding the appearance of superficial and deep normal cerebral venous sinuses using CDUS and the specificity of the technique to rule out disease. To determine the specificity, inter-modality and inter-reader agreement of color Doppler US (CDUS). To evaluate normal cerebral venous sinuses in neonates in comparison to MR venography (MRV). Newborns undergoing a clinically indicated brain MRI were prospectively evaluated. 
All underwent a dedicated CDUS of the cerebral venous sinuses within 10 h (mean, 3.5 h, range, and 2–7.6 h) of the MRI study using a standard protocol. Fifty consecutive neonates participated in the study (30 males [60%]; 25–41 weeks old; mean, 37 weeks). The mean time interval between the date of birth and the CDUS study was 19.1 days. No cases showed evidence of thrombosis. Overall agreement for US reading was 97% (range, 82–100%), for MRV reading, 99% (range, 96–100%) and for intermodality, 100% (range, 96–100%). Excellent US-MRI agreement was noted for superior sagittal sinus, cerebral veins, straight sinus, torcular Herophili, sigmoid sinus, superior jugular veins (94–98%) and transverse sinuses (82–86%). In 10 cases (20%), MRV showed flow gaps whereas normal flow was demonstrated with US. Visualization of the inferior sagittal sinus was limited with both imaging techniques. Excellent reading agreement was noted for US, MRV and intermodality. CDUS is highly specific to rule out cerebral venous thrombosis in neonates and holds potential for clinical application as part of clinical-laboratory-imaging algorithms of pre/post-test probabilities of disease.", "title": "" }, { "docid": "6bfc1850211819a2943c5cbff1355d0f", "text": "Constrained image splicing detection and localization (CISDL) is a newly proposed challenging task for image forensics, which investigates two input suspected images and identifies whether one image has suspected regions pasted from the other. In this paper, we propose a novel adversarial learning framework to train the deep matching network for CISDL. Our framework mainly consists of three building blocks: 1) the deep matching network based on atrous convolution (DMAC) aims to generate two high-quality candidate masks which indicate the suspected regions of the two input images, 2) the detection network is designed to rectify inconsistencies between the two corresponding candidate masks, 3) the discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. In DMAC, atrous convolution is adopted to extract features with rich spatial information, the correlation layer based on the skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. The detection network and the discriminative network act as the losses with auxiliary parameters to supervise the training of DMAC in an adversarial way. Extensive experiments, conducted on 21 generated testing sets and two public datasets, demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.", "title": "" }, { "docid": "225197acad74bf9a555305949f41cd4b", "text": "Security administration plays a vital role in network management tasks. The intrusion detection systems are primarily designed to protect the availability, confidentiality and integrity of critical network information systems. There are plenty of IDSes to choose from, both commercial and open source. Since most of the commercial intrusion detection systems are at typically thousands of dollars and they tend to represent a significant resource requirement in themselves, for small networks, use of such IDS is not feasible. Therefore mostly open source IDS are being used. This paper provides a general working behaviour, features and comparison of two most popular open source network IDS SNORT & BRO. 
Keywords: alerts, intrusion, logging, network traffic, open source, packets", "title": "" }, { "docid": "74b7acf77a55eadb7ca83d6812895d04", "text": "Research suggests that living in and adapting to foreign cultures facilitates creativity. The current research investigated whether one aspect of the adaptation process, multicultural learning, is a critical component of increased creativity. Experiments 1-3 found that recalling a multicultural learning experience: (a) facilitates idea flexibility (e.g., the ability to solve problems in multiple ways), (b) increases awareness of underlying connections and associations, and (c) helps overcome functional fixedness. Importantly, Experiments 2 and 3 specifically demonstrated that functional learning in a multicultural context (i.e., learning about the underlying meaning or function of behaviors in that context) is particularly important for facilitating creativity. Results showed that creativity was enhanced only when participants recalled a functional multicultural learning experience and only when participants had previously lived abroad. Overall, multicultural learning appears to be an important mechanism by which foreign living experiences lead to creative enhancement.", "title": "" }, { "docid": "28574c82a49b096b11f1b78b5d62e425", "text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represents a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming to map the challenge and organize the process of developing a revised GCCP 2.0.", "title": "" }, { "docid": "8d43d25619bd80d564c7c32d2592c4ac", "text": "Feature selection and dimensionality reduction are important steps in pattern recognition. In this paper, we propose a scheme for feature selection using linear independent component analysis and a mutual information maximization method. The method is theoretically motivated by the fact that the classification error rate is related to the mutual information between the feature vectors and the class labels.
The feasibility of the principle is illustrated on a synthetic dataset and its performance is demonstrated using EEG signal classification. Experimental results show that this method works well for feature selection.", "title": "" }, { "docid": "6c1b56a43b0475cf938a23a92a47761f", "text": "Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics. We then show that the agent implicitly learns key navigation abilities, through reinforcement learning with sparse rewards and without direct supervision.", "title": "" } ]
scidocsrr
2dc248505622a03590910a0b0a884432
Botnet protocol inference in the presence of encrypted traffic
[ { "docid": "b3801b9d9548c49c79eacef4c71e84ad", "text": "Identifying that a given binary program implements a specific cryptographic algorithm and finding out more information about the cryptographic code is an important problem. Proprietary programs and especially malicious software (so called malware) often use cryptography and we want to learn more about the context, e.g., which algorithms and keys are used by the program. This helps an analyst to quickly understand what a given binary program does and eases analysis. In this paper, we present several methods to identify cryptographic primitives (e.g., entire algorithms or only keys) within a given binary program in an automated way. We perform fine-grained dynamic binary analysis and use the collected information as input for several heuristics that characterize specific, unique aspects of cryptographic code. Our evaluation shows that these methods improve the state-of-the-art approaches in this area and that we can successfully extract cryptographic keys from a given malware binary.", "title": "" } ]
[ { "docid": "6ee6bc2f200a64c2de5481bf4adaeb5f", "text": "A generalized circuit topology for bipolar or unipolar high voltage repetitive pulse power applications is proposed. This circuit merges the negative and positive solid state Marx modulator concepts, which take advantage of the intensive use of semiconductor devices to increase the performance of the original dissipative Marx modulators. The flexibility of the proposed modular circuit enables the operation with negative and/or positive pulses, selectable duty cycles, frequencies and relaxation times between the positive and negative pulse. Additionally, the switching topology enables the discharge of the parasitic capacitances after each pulse, allowing the use of capacitive loads, and the clamping of inductive loads, recovering the reset energy back to the main capacitors. Analysis of efficiency and power loss will be addressed, as well as experimental details for different conditions based on laboratory prototype, with 1200 volt Insulated Gate Bipolar Transistors (IGBT), diodes, and 4.5 muF capacitors.", "title": "" }, { "docid": "9003a12f984d2bf2fd84984a994770f0", "text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.", "title": "" }, { "docid": "8cf10c84e6e389c0c10238477c619175", "text": "Based on self-determination theory, this study proposes and tests a motivational model of intraindividual changes in teacher burnout (emotional exhaustion, depersonalization, and reduced personal accomplishment). Participants were 806 French-Canadian teachers in public elementary and high schools. Results show that changes in teachers’ perceptions of classroom overload and students’ disruptive behavior are negatively related to changes in autonomous motivation, which in turn negatively predict changes in emotional exhaustion. Results also indicate that changes in teachers’ perceptions of students’ disruptive behaviors and school principal’s leadership behaviors are related to changes in self-efficacy, which in turn negatively predict changes in three burnout components. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6ef985d656f605d40705a582483d562e", "text": "A rising issue in the scientific community entails the identification of patterns in the evolution of the scientific enterprise and the emergence of trends that influence scholarly impact. In this direction, this paper investigates the mechanism with which citation accumulation occurs over time and how this affects the overall impact of scientific output. Utilizing data regarding the SOFSEM Conference (International Conference on Current Trends in Theory and Practice of Computer Science), we study a corpus of 1006 publications with their associated authors and affiliations to uncover the effects of collaboration on the conference output. 
We proceed to group publications into clusters based on the trajectories they follow in their citation acquisition. Representative patterns are identified to characterize dominant trends of the conference, while exploring phenomena of early and late recognition by the scientific community and their correlation with impact.", "title": "" }, { "docid": "4eb937f806ca01268b5ed1348d0cc40c", "text": "The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation: modifying or repairing an old plan so it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using plan-refinement operators. In planning by adaptation, a library plan (an arbitrary node in the plan graph) is the starting point for the search, and the plan-adaptation algorithm can apply both the same refinement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph and its systematicity ensures that it will do so without redundantly searching any parts of the graph.", "title": "" }, { "docid": "28f07ff6aab6d6d58d7208dc9705fdc9", "text": "A new era of theoretical computer science addresses fundamental problems about auctions, networks, and human behavior.", "title": "" }, { "docid": "1fe09f18898b152b665a43bbd2b687e1", "text": "A 2 × 2 dual-polarized antenna subarray with filtering responses is proposed in this paper. This antenna subarray is a multilayered 3-D geometry, including a dual-path 1 × 4 feeding network and four cavity-backed slot antennas. The isolation performance between two input ports is greatly improved by a novel method, which only needs to modify several vias in a square resonator. Cavities in the feeding network are properly arranged and coupled using different coupling structures, so that the operation modes in each cavity for different paths can always remain orthogonal, which enables the subarray to exhibit not only filtering functions (in both reflection coefficients and gain responses), but also a low cross-polarization level. A prototype is fabricated with a center frequency of 37 GHz and a bandwidth of 600 MHz for demonstration. Good agreement is achieved between simulation and measurement, for both S-parameter and far-field results. The proposed filtering dual-polarized antenna array is very suitable to be employed as the subarray in millimeter-wave 5G base stations to reduce the complexity and integration loss of such beamforming systems.", "title": "" }, { "docid": "3c118c4f2b418f801faee08050e3a165", "text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost.
Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.", "title": "" }, { "docid": "8108807e3f1685e28617714c4394bf02", "text": "With the recent advances in deep learning, neural network models have obtained state-of-the-art performances for many linguistic tasks in natural language processing. However, this rapid progress also brings enormous challenges. The opaque nature of a neural network model leads to hard-to-debug-systems and difficult-to-interpret mechanisms. Here, we introduce a visualization system that, through a tight yet flexible integration between visualization elements and the underlying model, allows a user to interrogate the model by perturbing the input, internal state, and prediction while observing changes in other parts of the pipeline. We use the natural language inference problem as an example to illustrate how a perturbation-driven paradigm can help domain experts assess the potential limitation of a model, probe its inner states, and interpret and form hypotheses about fundamental model mechanisms such as attention.", "title": "" }, { "docid": "e74573560a8da7be758c619ba85202df", "text": "This paper proposes two hybrid connectionist structural acoustical models for robust context independent phone like and word like units for speaker-independent recognition system. Such structure combines strength of Hidden Markov Models (HMM) in modeling stochastic sequences and the non-linear classification capability of Artificial Neural Networks (ANN). Two kinds of Neural Networks (NN) are investigated: Multilayer Perceptron (MLP) and Elman Recurrent Neural Networks (RNN). The hybrid connectionist-HMM systems use discriminatively trained NN to estimate the a posteriori probability distribution among subword units given the acoustic observations. We efficiently tested the performance of the conceived systems using the TIMIT database in clean and noisy environments with two perceptually motivated features: MFCC and PLP. Finally, the robustness of the systems is evaluated by using a new preprocessing stage for denoising based on wavelet transform. A significant improvement in performance is obtained with the proposed method.", "title": "" }, { "docid": "92ca87957f5b97d2b249bc73e9d9a48d", "text": "Methods for text simplification using the framework of statistical machine translation have been extensively studied in recent years. 
However, building the monolingual parallel corpus necessary for training the model requires costly human annotation. Monolingual parallel corpora for text simplification have therefore been built only for a limited number of languages, such as English and Portuguese. To obviate the need for human annotation, we propose an unsupervised method that automatically builds the monolingual parallel corpus for text simplification using sentence similarity based on word embeddings. For any sentence pair comprising a complex sentence and its simple counterpart, we employ a many-to-one method of aligning each word in the complex sentence with the most similar word in the simple sentence and compute sentence similarity by averaging these word similarities. The experimental results demonstrate the excellent performance of the proposed method in a monolingual parallel corpus construction task for English text simplification. The results also demonstrated the superior accuracy of text simplification that uses the framework of statistical machine translation trained on the corpus built by the proposed method, compared to that using the existing corpora.", "title": "" }, { "docid": "c8e446ab0dbdaf910b5fb98f672a35dc", "text": "MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S² ≤ R ≤ S/(2−S). Our worst case analysis can show that MinHash significantly outperforms SimHash in the high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S/(z−S) holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S/(z−S) ≤ R ≤ S/(2−S) shows that MinHash indeed significantly outperforms SimHash even in the low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse.", "title": "" }, { "docid": "2e8e02aa0581dbadfe05319d25346856", "text": "Clustering suffers from the curse of dimensionality, and similarity functions that use all input features with equal relevance may not be effective. We introduce an algorithm that discovers clusters in subspaces spanned by different combinations of dimensions via local weightings of features. This approach avoids the risk of loss of information encountered in global dimensionality reduction techniques, and does not assume any data distribution model.
Our method associates to each cluster a weight vector, whose values capture the relevance of features within the corresponding cluster. We experimentally demonstrate the gain in performance our method achieves with respect to competitive methods, using both synthetic and real datasets. In particular, our results show the feasibility of the proposed technique to perform simultaneous clustering of genes and conditions in gene expression data, and clustering of very high-dimensional data such as text data.", "title": "" }, { "docid": "c8d5ca95f6cd66461729cfc03772f5d0", "text": "Statistical relational models combine aspects of first-order logic and probabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in the model, and reasoning about groups of objects as a whole, lifted algorithms dramatically improve the run time of inference and learning. The thesis has five main contributions. First, we propose a new method for logical inference, called first-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity w.r.t. the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for", "title": "" }, { "docid": "683c80697974bfc402427805c3b02de1", "text": "OBJECTIVE\nTo report the clinical use of the QOLIBRI, a disease-specific measure of health-related quality-of-life (HRQoL) after traumatic brain injury (TBI).\n\n\nMETHODS\nThe QOLIBRI, with 37 items in six scales (cognition, self, daily life and autonomy, social relationships, emotions and physical problems) was completed by 795 patients in six languages (Finnish, German, Italian, French, English and Dutch). QOLIBRI scores were examined by variables likely to be influenced by rehabilitation interventions and included socio-demographic, functional outcome, health status and mental health variables.\n\n\nRESULTS\nThe QOLIBRI was self-completed by 73% of participants and 27% completed it in interview. It was sensitive to areas of life amenable to intervention, such as accommodation, work participation, health status (including mental health) and functional outcome.\n\n\nCONCLUSION\nThe QOLIBRI provides information about the patient's subjective perception of his/her HRQoL which supplements clinical measures and measures of functional outcome. It can be applied across different populations and cultures. It allows the identification of personal needs, the prioritization of therapeutic goals and the evaluation of individual progress. It may also be useful in clinical trials and in longitudinal studies of TBI recovery.", "title": "" }, { "docid": "a76a1aea4861dfd1e1f426ce55747b2a", "text": "Which topics spark the most heated debates in social media?
Identifying these topics is a first step towards creating systems which pierce echo chambers. In this paper, we perform a systematic methodological study of controversy detection using social media network structure and content.\n Unlike previous work, rather than identifying controversy in a single hand-picked topic and use domain-specific knowledge, we focus on comparing topics in any domain. Our approach to quantifying controversy is a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic, which represents alignment of opinion among users; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii)measuring the amount of controversy from characteristics of the~graph.\n We perform an extensive comparison of controversy measures, as well as graph building approaches and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task.", "title": "" }, { "docid": "addad4069782620549e7a357e2c73436", "text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.", "title": "" }, { "docid": "ba8467f6b5a28a2b076f75ac353334a0", "text": "Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. 
Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.", "title": "" }, { "docid": "04cc398c2a95119b4af7e0351d1d798a", "text": "A 16-year-old boy presented to the Emergency Department having noted the pictured skin markings on his left forearm several hours earlier. He stated that the markings had not been present earlier that afternoon, and had remained unchanged since first noted after track and field practice. There was no history of trauma, ingestions, or any systemic symptoms. The markings were neither tender nor pruritic. His parents denied any family history of malignancy. Physical examination revealed the raised black markings with minimal surrounding erythema, as seen in Figure 1. The rest of the dermatologic and remaining physical examinations were, and remained, unremarkable.", "title": "" }, { "docid": "64fddaba616a01558f3534ee723883cb", "text": "We demonstrate 70.4 Tb/s transmission over 7,600 km with C+L band EDFAs using coded modulation with hybrid probabilistic and geometrical constellation shaping. We employ multi-stage nonlinearity compensation including DBP, fast LMS equalizer and generalized filter.", "title": "" } ]
scidocsrr
8e85b6f278f1e37ba63bdf6b64900edd
A Robot Localization System Combining RSSI and Phase Shift in UHF-RFID Signals
[ { "docid": "36e42f2e4fd2f848eaf82440c2bcbf62", "text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.", "title": "" } ]
[ { "docid": "a65b11ebb320e4883229f4a50d51ae2f", "text": "Vast quantities of text are becoming available in electronic form, ranging from published documents (e.g., electronic dictionaries, encyclopedias, libraries and archives for information retrieval services), to private databases (e.g., marketing information, legal records, medical histories), to personal email and faxes. Online information services are reaching mainstream computer users. There were over 15 million Internet users in 1993, and projections are for 30 million in 1997. With media attention reaching all-time highs, hardly a day goes by without a new article on the National Information Infrastructure, digital libraries, networked services, digital convergence or intelligent agents. This attention is moving natural language processing along the critical path for all kinds of novel applications.", "title": "" }, { "docid": "fef45863bc531960dbf2a7783995bfdb", "text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.", "title": "" }, { "docid": "36371909115d45074f709b090f46b644", "text": "For many years Round the World racers and leading yacht owners have appreciated the benefit of carbon. Carbon fiber spars are around 50% lighter and considerably stronger than traditional aluminum masts. The result is increased speed, and the lighter mast also gives the boat a lower centre of gravity and so heeling and pitching is reduced. The recent spate of carbon mast failures has left concerns amongst the general yachting public about the reliability of the concept and ultimately the material itself. The lack of knowledge about loads acting on the mast prevents designers from coming with an optimum design. But a new program, the \"Smart Mast\" program, developed by two of Britain's leading marine companies, has been able to monitor loads acting on a mast in real-time with an optical fiber system. This improvement could possibly be a revolution in the design of racing yachts carbon masts and fill the design data shortage. Some other evolutions in the rigging design also appeared to be of interest, like for example the free-standing mast or a video system helping the helmsman to use its sails at their maximum. Thesis supervisor: Jerome J. 
Connor Title: Professor of Civil and Environmental Engineering", "title": "" }, { "docid": "28fa91e4476522f895a6874ebc967cfa", "text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.", "title": "" }, { "docid": "766312ba98a77e04f5acd5c90fd1e60e", "text": "Injuries of the ankle joint have a high incidence in daily life and sports, thus, playing an important socioeconomic role. Therefore, proper diagnosis and adequate treatment are mandatory. While most of the ligament injuries around the ankle joint are treated conservatively, great controversy exists on how to treat deltoid ligament injuries in ankle fractures. Missed injuries and inadequate treatment of the medial ankle lead to inferior outcome with instability, progressive deformity, and ankle joint osteoarthritis.", "title": "" }, { "docid": "ad65f147a3641482e56131aa6e95104d", "text": "Studies of upper limb motion analysis using surface electromyogram (sEMG) signals measured from the forearm play an important role in various applications, such as human interfaces for controlling robotic exoskeletons, prosthetic hands, and evaluation of body functions. Though the sEMG signals have a lot of information about the activities of the muscles, the signals do not capture the activities of the deep layer muscles. We focused on forearm deformation, since hand motion moves the muscles, tendons, and skeleton under the skin. We focus on it because we believe the forearm deformation delivers information about the activities of deep layer muscles. In this paper, we propose a hand motion recognition method based on the forearm deformation measured with a distance sensor array. The method uses the support vector machine. Our method achieved a mean accuracy of 92.6% for seven hand motions. Because the accuracy of the pronation and the supination is high, the distance sensor array has the potential to estimate the activities of deep layer muscles.", "title": "" }, { "docid": "136b64921e7a472ef3dd79c11deba7dd", "text": "Nowadays, security forms one of the most important parts of our lives. Security of the house or the near and dear ones is important to everybody. Home automation is an exciting area for security applications. This field has been enhanced with new technologies such as the Internet of Things (IoT).
In IoT, every gadget behaves as a little part of an internet node, and every node interacts and communicates. Lately, security cameras are utilized in order to make places, homes, and cities safer. However, this technology needs a person who detects any problem in the frame taken from the camera. In this paper, the Internet of Things is joined with computer vision in order to detect the faces of people. For this purpose, a credit-card-sized computer that utilizes its own camera board, such as the Raspberry Pi 3, is used to execute the security system. Likewise, a Passive Infrared Sensor (PIR) mounted on the Raspberry Pi is utilized to detect any movements. So it helps to monitor and get notifications when motion is identified, capture the image and detect the faces, and then send images to a smartphone via the Telegram application. The Internet of Things system, based on the Telegram application, is used to see the activity and get notices when movement is detected.", "title": "" }, { "docid": "950fe0124f830a63f528aa5905116c82", "text": "One of the main barriers to immersivity during object manipulation in virtual reality is the lack of realistic haptic feedback. Our goal is to convey compelling interactions with virtual objects, such as grasping, squeezing, pressing, lifting, and stroking, without requiring a bulky, world-grounded kinesthetic feedback device (traditional haptics) or the use of predetermined passive objects (haptic retargeting). To achieve this, we use a pair of finger-mounted haptic feedback devices that deform the skin on the fingertips to convey cutaneous force information from object manipulation. We show that users can perceive differences in virtual object weight and that they apply increasing grasp forces when lifting virtual objects as rendered mass is increased. Moreover, we show how naive users perceive changes of a virtual object's physical properties when we use skin deformation to render objects with varying mass, friction, and stiffness. These studies demonstrate that fingertip skin deformation devices can provide a compelling haptic experience appropriate for virtual reality scenarios involving object manipulation.", "title": "" }, { "docid": "8437f899a40cf54489b8e86870c32616", "text": "Lifelong machine learning (or lifelong learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples, and is only suitable for well-defined and narrow tasks. In comparison, we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past which enables us to learn with little data or effort. Furthermore, we are able to discover new problems in the usage process of the learned knowledge or model. This enables us to learn more and more continually in a self-motivated manner. We can also adapt our previous knowledge to solve unfamiliar problems and learn in the process.
Lifelong learning aims to achieve these capabilities. As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to a new height. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments are also calling for such lifelong learning capabilities. Without the ability to accumulate the learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey to lifelong learning.", "title": "" }, { "docid": "e08cfc5d9c67a5c806750dc7c747c52f", "text": "To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.", "title": "" }, { "docid": "46cecc587352fee7248377bbca2c03d2", "text": "Several tasks in urban and architectural design are today undertaken in a geospatial context. Building Information Models (BIM) and geospatial technologies offer 3D data models that provide information about buildings and the surrounding environment. The Industry Foundation Classes (IFC) and CityGML are today the two most prominent semantic models for representation of BIM and geospatial models respectively. CityGML has emerged as a standard for modeling city models while IFC has been developed as a reference model for building objects and sites. Current CAD and geospatial software provide tools that allow the conversion of information from one format to the other. These tools are however fairly limited in their capabilities, often resulting in data and information losses in the transformations. 
This paper describes a new approach for data integration based on a unified building model (UBM) which encapsulates both the CityGML and IFC models, thus avoiding translations between the models and loss of information. To build the UBM, all classes and related concepts were initially collected from both models, overlapping concepts were merged, new objects were created to ensure the capturing of both indoor and outdoor objects, and finally, spatial relationships between the objects were redefined. Unified Modeling Language (UML) notations were used for representing its objects and relationships between them. Two use-case scenarios, both set in a hospital, “evacuation” and “allocating spaces for patient wards”, were developed to validate and test the proposed UBM data model. Based on these two scenarios, four validation queries were defined in order to validate the appropriateness of the proposed unified building model. It has been validated, through the case scenarios and four queries, that the UBM being developed is able to integrate CityGML data as well as IFC data in an apparently seamless way. Constraints and enrichment functions are used for populating empty database tables and fields. The motivation scenarios also show the needs and benefits of having an integrated approach to the modeling of indoor and outdoor spatial features.", "title": "" }, { "docid": "d1e504ca97d70a2172a9da67ded7c011", "text": "Akhter Saeed College of Pharmaceutical Sciences, Lahore. Moringa oleifera, Lam {Syn M. pterygosperma Gaertn}, usually mentioned in literature as Moringa, is a natural as well as cultivated variety of the genus Moringa belonging to the family Moringaceae. It is one of the richest plant sources of Vitamins A, B {1,2,3,6,7}, C, D, E and K. The vital minerals present in Moringa include Calcium, Copper, Iron, Potassium, Magnesium, Manganese and Zinc. It has more than 40 natural anti-oxidants. Moringa has been used since 150 B.C. by ancient kings and queens in their diet for mental alertness and healthy skin. The leaves, pods, seeds, gums, bark and flowers of Moringa are used in more than 80 countries {including Pakistan} to relieve mineral and vitamin deficiencies, support a healthy cardiovascular system, promote normal blood-glucose levels, neutralize free radicals {thereby reducing malignancy}, provide excellent support of the body's anti-inflammatory mechanisms, enrich anemic blood and support the immune system. It also improves eyesight, mental alertness and bone strength. It has potential benefit in malnutrition, general weakness, lactating mothers, menopause, depression and", "title": "" }, { "docid": "397f6c39825a5d8d256e0cc2fbba5d15", "text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces.
We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "aab50cf9f6f0caf34161fa3232229a90", "text": "A low-profile antenna on HIS/AMC/EBG surface was developed for UHF applications. The surface is inkjet-printed on paper which produces a low-cost, light-weight, and environment-friendly solution. More compact unit cells (and thus smaller overall size) and/or wider antenna bandwidth can be realized using alternative unit cells/antenna configurations as discussed in [5–6].", "title": "" }, { "docid": "6569b0630f9d9b9a5e3ca0849829f8cb", "text": "A long-term follow-up study of 55 transsexual patients (32 male-to-female and 23 female-to-male) post-sex reassignment surgery (SRS) was carried out to evaluate sexual and general health outcome. Relatively few and minor morbidities were observed in our group of patients, and they were mostly reversible with appropriate treatment. A trend toward more general health problems in male-to-females was seen, possibly explained by older age and smoking habits. Although all male-to-females, treated with estrogens continuously, had total testosterone levels within the normal female range because of estrogen effects on sex hormone binding globulin, only 32.1% reached normal free testosterone levels. After SRS, the transsexual person's expectations were met at an emotional and social level, but less so at the physical and sexual level even though a large number of transsexuals (80%) reported improvement of their sexuality. The female-to-males masturbated significantly more frequently than the male-to-females, and a trend to more sexual satisfaction, more sexual excitement, and more easily reaching orgasm was seen in the female-to-male group. The majority of participants reported a change in orgasmic feeling, toward more powerful and shorter for female-to-males and more intense, smoother, and longer in male-to-females. Over two-thirds of male-to-females reported the secretion of a vaginal fluid during sexual excitation, originating from the Cowper's glands, left in place during surgery. In female-to-males with erection prosthesis, sexual expectations were more realized (compared to those without), but pain during intercourse was more often reported.", "title": "" }, { "docid": "1dee6d60a94e434dd6d3b6754e9cd3f3", "text": "The barrier function of the intestine is essential for maintaining the normal homeostasis of the gut and mucosal immune system. 
Abnormalities in intestinal barrier function expressed by increased intestinal permeability have long been observed in various gastrointestinal disorders such as Crohn's disease (CD), ulcerative colitis (UC), celiac disease, and irritable bowel syndrome (IBS). Imbalance of metabolizing junction proteins and mucosal inflammation contributes to intestinal hyperpermeability. Emerging studies exploring in vitro and in vivo model system demonstrate that Rho-associated coiled-coil containing protein kinase- (ROCK-) and myosin light chain kinase- (MLCK-) mediated pathways are involved in the regulation of intestinal permeability. With this perspective, we aim to summarize the current state of knowledge regarding the role of inflammation and ROCK-/MLCK-mediated pathways leading to intestinal hyperpermeability in gastrointestinal disorders. In the near future, it may be possible to specifically target these specific pathways to develop novel therapies for gastrointestinal disorders associated with increased gut permeability.", "title": "" }, { "docid": "821c219c35463116bedce9901d33b11d", "text": "In this paper, we show a class of relationships which link Discrete Cosine Transforms (DCT) and Discrete Sine Transforms (DST) of types V, VI, VII and VIII, which have been recently considered for inclusion in the future video coding technology. In particular, the proposed relationships allow to compute the DCT-V and the DCT-VIII as functions of the DCT-VI and the DST-VII respectively, plus simple reordering and sign-inversion. Moreover, this paper exploits the proposed relationships and the Winograd factorization of the Discrete Fourier Transform to construct low-complexity factorizations for computing the DCT-V and the DCT-VIII of length 4 and 8. Finally, the proposed signal-flow-graphs have been implemented using an FPGA technology, thus showing reduced hardware utilization with respect to the direct implementation of the matrix-vector multiplication algorithm.", "title": "" }, { "docid": "3b83d1fcb735a68d4ef1026f300b1055", "text": "Entropy-based image thresholding has received considerable interest in recent years. Two types of entropy are generally used as thresholding criteria: Shannon’s entropy and relative entropy, also known as Kullback–Leibler information distance, where the former measures uncertainty in an information source with an optimal threshold obtained by maximising Shannon’s entropy, whereas the latter measures the information discrepancy between two different sources with an optimal threshold obtained by minimising relative entropy. Many thresholding methods have been developed for both criteria and reported in the literature. These two entropybased thresholding criteria have been investigated and the relationship among entropy and relative entropy thresholding methods has been explored. In particular, a survey and comparative analysis is conducted among several widely used methods that include Pun and Kapur’s maximum entropy, Kittler and Illingworth’s minimum error thresholding, Pal and Pal’s entropy thresholding and Chang et al.’s relative entropy thresholding methods. In order to objectively assess these methods, two measures, uniformity and shape, are used for performance evaluation.", "title": "" }, { "docid": "27f1f3791b7a381f92833d4983620b7e", "text": "Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. 
This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.", "title": "" } ]
scidocsrr
433fa521f989eb63f08022c8f1b64874
Information theoretic framework of trust modeling and evaluation for ad hoc networks
[ { "docid": "135513fa93b5fade93db11fdf942fe7a", "text": "This paper describes two techniques that improve throughput in an ad hoc network in the presence of nodes that agree to forward packets but fail to do so. To mitigate this problem, we propose categorizing nodes based upon their dynamically measured behavior. We use a watchdog that identifies misbehaving nodes and a pathrater that helps routing protocols avoid these nodes. Through simulation we evaluate watchdog and pathrater using packet throughput, percentage of overhead (routing) transmissions, and the accuracy of misbehaving node detection. When used together in a network with moderate mobility, the two techniques increase throughput by 17% in the presence of 40% misbehaving nodes, while increasing the percentage of overhead transmissions from the standard routing protocol's 9% to 17%. During extreme mobility, watchdog and pathrater can increase network throughput by 27%, while increasing the overhead transmissions from the standard routing protocol's 12% to 24%.", "title": "" }, { "docid": "3bb3c723e8342c8f5e466a591855591e", "text": "Reputations that are transmitted from person to person can deter moral hazard and discourage entry by bad types in markets where players repeat transactions but rarely with the same player. On the Internet, information about past transactions may be both limited and potentially unreliable, but it can be distributed far more systematically than the informal gossip among friends that characterizes conventional marketplaces. One of the earliest and best known Internet reputation systems is run by eBay, which gathers comments from buyers and sellers about each other after each transaction. Examination of a large data set from 1999 reveals several interesting features of this system, which facilitates many millions of sales each month. First, despite incentives to free ride, feedback was provided more than half the time. Second, well beyond reasonable expectation, it was almost always positive. Third, reputation profiles were predictive of future performance. However, the net feedback scores that eBay displays encourages Pollyanna assessments of reputations, and is far from the best predictor available. Fourth, although sellers with better reputations were more likely to sell their items, they enjoyed no boost in price, at least for the two sets of items that we examined. Fifth, there was a high correlation between buyer and seller feedback, suggesting that the players reciprocate and retaliate.", "title": "" }, { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" } ]
[ { "docid": "f944f5e334a127cd50ab3ec0d3c2b603", "text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity", "title": "" }, { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "a3bff96ab2a6379d21abaea00bc54391", "text": "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.", "title": "" }, { "docid": "68c31aa73ba8bcc1b3421981877d4310", "text": "Several approaches are available to create cross-platform applications. The majority of these approaches focus on purely mobile platforms. Their principle is to develop the application once and be able to deploy it to multiple mobile platforms with different operating systems (Android (Java), IOS (Objective C), Windows Phone 7 (C#), etc.). In this article, we propose a merged approach and cross-platform called ZCA \"ZeroCouplage Approach\". 
Merged to regroup the strong points of the approaches: \"Runtime\", \"Component-Based\" and \"Cloud-Based\", thanks to a design pattern which we created and named M2VC (Model-Virtual-View-Controller). Cross-platform allows creating a unique application that is deployable directly on many platforms: Web, Mobile and Desktop. In this article, we also compare our ZCA approach with others to demonstrate its added value. Our idea, contrary to mobile approaches, consists of using a given technology to implement cross-platform applications. To validate our approach, we have developed an open source framework named ZCF \"ZeroCouplage Framework\" for Java technology.", "title": "" }, { "docid": "10172bbb61d404eb38a898bafadb5021", "text": "Numerical code uses floating-point arithmetic and necessarily suffers from roundoff and truncation errors. Error analysis is the process of quantifying such uncertainty in the solution to a problem. Forward error analysis and backward error analysis are two popular paradigms of error analysis. Forward error analysis is more intuitive and has been explored and automated by the programming languages (PL) community. In contrast, although backward error analysis is preferred by numerical analysts and is the foundation for numerical stability, it is less known and unexplored by the PL community. To fill the gap, this paper presents an automated backward error analysis for numerical code to empower both numerical analysts and application developers. In addition, we use the computed backward error results to also compute the condition number, an important quantity recognized by numerical analysts for measuring how sensitive a function is to changes or errors in the input. Experimental results on Intel X87 FPU functions and widely-used GNU C Library functions demonstrate that our analysis is effective at analyzing the accuracy of floating-point programs.", "title": "" }, { "docid": "f1a2d243c58592c7e004770dfdd4a494", "text": "Dynamic voltage scaling (DVS), which adjusts the clock speed and supply voltage dynamically, is an effective technique in reducing the energy consumption of embedded real-time systems. The energy efficiency of a DVS algorithm largely depends on the performance of the slack estimation method used in it. In this paper, we propose a novel DVS algorithm for periodic hard real-time tasks based on an improved slack estimation algorithm. Unlike the existing techniques, the proposed method takes full advantage of the periodic characteristics of the real-time tasks under priority-driven scheduling such as EDF. Experimental results show that the proposed algorithm reduces the energy consumption by 20~40% over the existing DVS algorithm. The experiment results also show that our algorithm based on the improved slack estimation method gives comparable energy savings to the DVS algorithm based on the theoretically optimal (but impractical) slack estimation method.", "title": "" }, { "docid": "512964f588b7afe09183dbaa3fe254d0", "text": "This paper proposes an Internet of Things (IoT)-enabled multiagent system (MAS) for residential DC microgrids (RDCMG). The proposed MAS consists of smart home agents (SHAs) that cooperate with each other to alleviate the peak load of the RDCMG and to minimize the electricity costs for smart homes. These are achieved by agent utility functions and the best operating time algorithm (BOT) in the MAS.
Moreover, IoT-based efficient and cost-effective agent communication method is proposed, which applies message queuing telemetry transport (MQTT) publish/subscribe protocol via MQTT brokers. The proposed IoT-enabled MAS and smart home models are implemented in five Raspberry pi 3 boards and validated by experimental studies for a RDCMG with five smart homes.", "title": "" }, { "docid": "d50d07954360c23bcbe3802776562f34", "text": "A stationary display of white discs positioned on intersecting gray bars on a dark background gives rise to a striking scintillating effectthe scintillating grid illusion. The spatial and temporal properties of the illusion are well known, but a neuronal-level explanation of the mechanism has not been fully investigated. Motivated by the neurophysiology of the Limulus retina, we propose disinhibition and self-inhibition as possible neural mechanisms that may give rise to the illusion. In this letter, a spatiotemporal model of the early visual pathway is derived that explicitly accounts for these two mechanisms. The model successfully predicted the change of strength in the illusion under various stimulus conditions, indicating that low-level mechanisms may well explain the scintillating effect in the illusion.", "title": "" }, { "docid": "04170c7d1110a265bcf3e9eeb32fbeef", "text": "Compared with person reidentification, which has attracted concentrated attention, vehicle reidentification is an important yet frontier problem in video surveillance and has been neglected by the multimedia and vision communities. Since most existing approaches mainly consider the general vehicle appearance for reidentification while overlooking the distinct vehicle identifier, such as the license plate number, they attain suboptimal performance. In this paper, we propose PROVID, a PROgressive Vehicle re-IDentification framework based on deep neural networks. In particular, our framework not only utilizes the multimodality data in large-scale video surveillance, such as visual features, license plates, camera locations, and contextual information, but also considers vehicle reidentification in two progressive procedures: coarse-to-fine search in the feature domain, and near-to-distant search in the physical space. Furthermore, to evaluate our progressive search framework and facilitate related research, we construct the VeRi dataset, which is the most comprehensive dataset from real-world surveillance videos. It not only provides large numbers of vehicles with varied labels and sufficient cross-camera recurrences but also contains license plate numbers and contextual information. Extensive experiments on the VeRi dataset demonstrate both the accuracy and efficiency of our progressive vehicle reidentification framework.", "title": "" }, { "docid": "d10d60684e6915ba7deb959f4fe842ae", "text": "Supervised learning methods have long been used to allow musical interface designers to generate new mappings by example. We propose a method for harnessing machine learning algorithms within a radically interactive paradigm, in which the designer may repeatedly generate examples, train a learner, evaluate outcomes, and modify parameters in real-time within a single software environment. We describe our meta-instrument, the Wekinator, which allows a user to engage in on-the-fly learning using arbitrary control modalities and sound synthesis environments. 
We provide details regarding the system implementation and discuss our experiences using the Wekinator for experimentation and performance.", "title": "" }, { "docid": "42dfa7988f31403dba1c390741aa164c", "text": "This study explored friendship variables in relation to body image, dietary restraint, extreme weight-loss behaviors (EWEBs), and binge eating in adolescent girls. From 523 girls, 79 friendship cliques were identified using social network analysis. Participants completed questionnaires that assessed body image concerns, eating, friendship relations, and psychological family, and media variables. Similarity was greater for within than for between friendship cliques for body image concerns, dietary restraint, and EWLBs, but not for binge eating. Cliques high in body image concerns and dieting manifested these concerns in ways consistent with a high weight/shape-preoccupied subculture. Friendship attitudes contributed significantly to the prediction of individual body image concern and eating behaviors. Use of EWLBs by friends predicted an individual's own level of use.", "title": "" }, { "docid": "fd9717ee3f6fc31918594bd4855c799c", "text": "Aggregating context information from multiple scales has been proved to be effective for improving accuracy of Single Shot Detectors (SSDs) on object detection. However, existing multi-scale context fusion techniques are computationally expensive, which unfavorably diminishes the advantageous speed of SSD. In this work, we propose a novel network topology, called WeaveNet, that can efficiently fuse multi-scale information and boost the detection accuracy with negligible extra cost. The proposed WeaveNet iteratively weaves context information from adjacent scales together to enable more sophisticated context reasoning while maintaining fast speed. Built by stacking light-weight blocks, WeaveNet is easy to train without requiring batch normalization and can be further accelerated by our proposed architecture simplification. Experimental results on PASCAL VOC 2007, PASCAL VOC 2012 benchmarks show signification performance boost brought by WeaveNet. For 320×320 input of batch size = 8, WeaveNet reaches 79.5% mAP on PASCAL VOC 2007 test in 101 fps with only 4 fps extra cost, and further improves to 79.7% mAP with more iterations.", "title": "" }, { "docid": "327269bae688715cafb872c1f3c6f1e9", "text": "The modified Ashworth scale (MAS) is the most widely used measurement technique to assess levels of spasticity. In MAS, the evaluator graduates spasticity considering his/her subjective analysis of the muscular endurance during passive stretching. Therefore, it is a subjective scale. Mechanomyography (MMG) allows registering the vibrations generated by muscle contraction and stretching events that propagate through the tissue until the surface of the skin. With this in mind, this study aimed to investigate possible correlations between MMG signal and muscle spasticity levels determined by MAS. We evaluated 34 limbs considered spastic by MAS, including upper and lower limbs of 22 individuals of both sexes. Simultaneously, the MMG signals of the spastic muscle group (agonists) were acquired. The features investigated involved, in the time domain, the median energy (MMGME) of the MMG Z-axis (perpendicular to the muscle fibers) and, in the frequency domain, the median frequency (MMGmf). The Kruskal-Wallis test (p<;0.001) determined that there were significant differences between intergroup MAS spasticity levels for MMGme. 
There was a high linear correlation between the MMGme and MAS (R2=0.9557) and also a high correlation as indicated by Spearman test (ρ=0.9856; p<;0.001). In spectral analysis, the Kruskal-Wallis test (p = 0.0059) showed that MMGmf did not present significant differences between MAS spasticity levels. There was moderate linear correlation between MAS and MMGmf (R2=0.4883 and Spearman test [ρ = 0.4590; p <; 0.001]). Between the two investigated features, we conclude that the median energy is the most viable feature to evaluate spasticity due to strong correlations with the MAS.", "title": "" }, { "docid": "df183cdef3bba1b99165daa4ff99fddc", "text": "Porous poly(lactic-co-glycolic acid) (PLGA) microspheres were prepared, loaded with insulin, and then coated in poly(vinyl alcohol) (PVA) and a novel boronic acid-containing copolymer [poly(acrylamide phenyl boronic acid-co-N-vinylcaprolactam); p(AAPBA-co-NVCL)]. Multilayer microspheres were generated using a layer-by-layer approach depositing alternating coats of PVA and p(AAPBA-co-NVCL) on the PLGA surface, with the optimal system found to be that with eight alternating layers of each coating. The resultant material comprised spherical particles with a porous PLGA core and the pores covered in the coating layers. Insulin could successfully be loaded into the particles, with loading capacity and encapsulation efficiencies reaching 2.83 ± 0.15 and 82.6 ± 5.1% respectively, and was found to be present in the amorphous form. The insulin-loaded microspheres could regulate drug release in response to a changing concentration of glucose. In vitro and in vivo toxicology tests demonstrated that they are safe and have high biocompatibility. Using the multilayer microspheres to treat diabetic mice, we found they can effectively control blood sugar levels over at least 18 days, retaining their glucose-sensitive properties during this time. Therefore, the novel multilayer microspheres developed in this work have significant potential as smart drug-delivery systems for the treatment of diabetes.", "title": "" }, { "docid": "3c865290194f17f68672728cd57f14cb", "text": "It is commonly agreed that the next generation of wireless communication systems, usually referred to as 4G systems, will not be based on a single access technique but it will encompass a number of different complementary access technologies. The ultimate goal is to provide ubiquitous connectivity, integrating seamlessly operations in most common scenarios, ranging from fixed and low-mobility indoor environments in one extreme to high-mobility cellular systems in the other extreme. Surprisingly, perhaps the largest installed base of short-range wireless communications links are optical, rather than RF, however. Indeed, ‘point and shoot’ links corresponding to the Infra-Red Data Association (IRDA) standard are installed in 100 million devices a year, mainly digital cameras and telephones. In this paper we argue that optical wireless communications (OW) has a part to play in the wider 4G vision. An introduction to OW is presented, together with scenarios where optical links can enhance the performance of wireless networks.", "title": "" }, { "docid": "51ac4581fa82be87a28f7c080e026ae6", "text": "III", "title": "" }, { "docid": "457ea53f0a303e8eba8847422ef61e5a", "text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. 
Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and", "title": "" }, { "docid": "bdb25b8afaf922bd20e051e311c96fe1", "text": "Ear detection is an important step in ear recognition approaches. Most existing ear detection techniques are based on manually designing features or shallow learning algorithms. However, researchers found that the pose variation, occlusion, and imaging conditions provide a great challenge to the traditional ear detection methods under uncontrolled conditions. This paper proposes an efficient technique involving Multiple Scale Faster Region-based Convolutional Neural Networks (Faster R-CNN) to detect ears from 2D profile images in natural images automatically. Firstly, three regions of different scales are detected to infer the information about the ear location context within the image. Then an ear region filtering approach is proposed to extract the correct ear region and eliminate the false positives automatically. In an experiment with a test set of 200 web images (with variable photographic conditions), 98% of ears were accurately detected. Experiments were likewise conducted on the Collection J2 of University of Notre Dame Biometrics Database (UND-J2) and University of Beira Interior Ear dataset (UBEAR), which contain large occlusion, scale, and pose variations. Detection rates of 100% and 98.22%, respectively, demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "a451a351d50c3441d4ca8a964bf7312e", "text": "With the growing complexity and scale of high performance computing (HPC) systems, application performance variation has become a significant challenge in efficient and resilient system management. Application performance variation can be caused by resource contention as well as softwareand firmware-related problems, and can lead to premature job termination, reduced performance, and wasted compute platform resources. To effectively alleviate this problem, system administrators must detect and identify the anomalies that are responsible for performance variation and take preventive actions. However, diagnosing anomalies is often a difficult task given the vast amount of noisy and high-dimensional data being collected via a variety of system monitoring infrastructures. In this paper, we present a novel framework that uses machine learning to automatically diagnose previously encountered performance anomalies in HPC systems. Our framework leverages resource usage and performance counter data collected during application runs. 
We first convert the collected time series data into statistical features that retain application characteristics to significantly reduce the computational overhead of our technique. We then use machine learning algorithms to learn anomaly characteristics from this historical data and to identify the types of anomalies observed while running applications. We evaluate our framework both on an HPC cluster and on a public cloud, and demonstrate that our approach outperforms current state-of-the-art techniques in detecting anomalies, reaching an F-score over 0.97.", "title": "" }, { "docid": "dc6b18431878c1b8999d2c261b142ae7", "text": "Organizations are increasing their reliance on virtual relationships in structuring operations for a global environment. Like all teams, virtual teams require a solid foundation of mutual trust and collaboration, if they are to function effectively. Identifying and applying appropriate team building strategies for a virtual environment will not only enhance organizational effectiveness but will also impact positively on the quality of working life for virtual team members.", "title": "" } ]
scidocsrr
2eaf406b95355d848d94e10891bb9446
Imbalanced sentiment classification
[ { "docid": "ddea585db9c7772353276241f4d6bfe0", "text": "In this paper, we present a dependency treebased method for sentiment classification of Japanese and English subjective sentences using conditional random fields with hidden variables. Subjective sentences often contain words which reverse the sentiment polarities of other words. Therefore, interactions between words need to be considered in sentiment classification, which is difficult to be handled with simple bag-of-words approaches, and the syntactic dependency structures of subjective sentences are exploited in our method. In the method, the sentiment polarity of each dependency subtree in a sentence, which is not observable in training data, is represented by a hidden variable. The polarity of the whole sentence is calculated in consideration of interactions between the hidden variables. Sum-product belief propagation is used for inference. Experimental results of sentiment classification for Japanese and English subjective sentences showed that the method performs better than other methods based on bag-of-features.", "title": "" }, { "docid": "7f74c519207e469c39f81d52f39438a0", "text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.", "title": "" }, { "docid": "1ac4ac9b112c2554db37de2070d7c2df", "text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. 
The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.", "title": "" } ]
[ { "docid": "92cf6e3fd47d40c52bb80faaafab07c8", "text": "Graham-Little syndrome, also know as Graham-Little-Piccardi-Lassueur syndrome, is an unusual form of lichen planopilaris, characterized by the presence of cicatricial alopecia on the scalp, keratosis pilaris of the trunk and extremities, and non-cicatricial hair loss of the pubis and axillae. We present the case of a 47-year-old woman whose condition was unusual in that there was a prominence of scalp findings. Her treatment included a topical steroid plus systemic prednisone beginning at 30 mg every morning, which rendered her skin smooth, but did not alter her scalp lopecia.", "title": "" }, { "docid": "2b5ade239beea52315e50e0d4fde197f", "text": "The ultimate goal of research is to produce dependable knowledge or to provide the evidence that may guide practical decisions. Statistical conclusion validity (SCV) holds when the conclusions of a research study are founded on an adequate analysis of the data, generally meaning that adequate statistical methods are used whose small-sample behavior is accurate, besides being logically capable of providing an answer to the research question. Compared to the three other traditional aspects of research validity (external validity, internal validity, and construct validity), interest in SCV has recently grown on evidence that inadequate data analyses are sometimes carried out which yield conclusions that a proper analysis of the data would not have supported. This paper discusses evidence of three common threats to SCV that arise from widespread recommendations or practices in data analysis, namely, the use of repeated testing and optional stopping without control of Type-I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied. For each of these threats, examples are presented and alternative practices that safeguard SCV are discussed. Educational and editorial changes that may improve the SCV of published research are also discussed.", "title": "" }, { "docid": "30e9afa44756fa1b050945e9f3e1863e", "text": "A 8-year-old Chinese boy with generalized pustular psoriasis (GPP) refractory to cyclosporine and methylprednisolone was treated successfully with two infusions of infliximab 3.3 mg/kg. He remained in remission for 21 months. Direct sequencing of IL36RN gene showed a homozygous mutation, c.115 + 6T>C. Juvenile GPP is a rare severe form of psoriasis occasionally associated with life-threatening complications. Like acitretin, cyclosporine and methotrexate, infliximab has been reported to be effective for juvenile GPP in case reports. However, there is a lack of data in the optimal treatment course of infliximab for juvenile GPP. Prolonged administration of these medications may cause toxic or fatal complications. We suggest that short-term infliximab regimen should be recommended as a choice for acute juvenile GPP refractory to traditional systemic therapies. WBC count and CRP are sensitive parameters to reflect the disease activity and evaluate the effectiveness of treatment. Monitoring CD4 T lymphocyte count, preventing and correcting CD4 lymphocytopenia are important in the treatment course of juvenile GPP.", "title": "" }, { "docid": "9582bf78b9227fa4fd2ebdb957138571", "text": "The prestige of publication has been based on traditional citation metrics, most commonly journal impact factor. 
However, the Internet has radically changed the speed, flow, and sharing of medical information. Furthermore, the explosion of social media, along with development of popular professional and scientific websites and blogs, has led to the need for alternative metrics, known as altmetrics, to quantify the wider impact of research. We explore the evolution of current research impact metrics and examine the evolving role of altmetrics in measuring the wider impact of research. We suggest that altmetrics used in research evaluation should be part of an informed peer-review process such as traditional metrics. Moreover, results based on altmetrics must not lead to direct decision making about research, but instead, should be used to assist experts in making decisions. Finally, traditional and alternative metrics should complement, not replace, each other in the peer-review process.", "title": "" }, { "docid": "21f6ca062098c0dcf04fe8fadfc67285", "text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.", "title": "" }, { "docid": "a493c6f93a4a949fc2ea32dbca26cb26", "text": "Studies of irony detection have commonly used ironic criticisms (i.e., mock positive evaluation of negative circumstances) as stimulus materials. Another basic type of verbal irony, ironic praise (i.e., mock negative evaluation of positive circumstances) is largely absent from studies on individuals' aptitude to detect verbal irony. However, it can be argued that ironic praise needs to be considered in order to investigate the detection of irony in the variety of its facets. To explore whether the detection ironic praise has a benefit beyond ironic criticism, three studies were conducted. 
In Study 1, an instrument (Test of Verbal Irony Detection Aptitude; TOVIDA) was constructed and its factorial structure was tested using N = 311 subjects. The TOVIDA contains 26 scenario-based items and contains two scales for the detection of ironic criticism vs. ironic praise. To validate the measurement method, the two scales of the TOVIDA were experimentally evaluated with N = 154 subjects in Study 2. In Study 3, N = 183 subjects were tested to explore personality and ability correlates of the two TOVIDA scales. Results indicate that the co-variance between the ironic TOVIDA items was organized by two inter-correlated but distinct factors: one representing ironic praise detection aptitude and one representing ironic criticism detection aptitude. Experimental validation showed that the TOVIDA items truly contain irony and that item scores reflect irony detection. Trait bad mood and benevolent humor (as a facet of the sense of humor) were found as joint correlates for both ironic criticism and ironic praise detection scores. In contrast, intelligence, trait cheerfulness, and corrective humor were found as unique correlates of ironic praise detection scores, even when statistically controlling for the aptitude to detect ironic criticism. Our results indicate that the aptitude to detect ironic praise can be seen as distinct from the aptitude to detect ironic criticism. Generating unique variance in irony detection, ironic praise can be postulated as worthwhile to include in future studies-especially when studying the role of mental ability, personality, and humor in irony detection.", "title": "" }, { "docid": "17cc2f4ae2286d36748b203492d406e6", "text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.", "title": "" }, { "docid": "caa7ecc11fc36950d3e17be440d04010", "text": "In this paper, a comparative study of routing protocols is performed in a hybrid network to recommend the best routing protocol to perform load balancing for Internet traffic. Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP) and Intermediate System to Intermediate System (IS-IS) routing protocols are compared in OPNET modeller 14 to investigate their capability of ensuring fair distribution of traffic in a hybrid network. The network simulated is scaled to a campus. The network loads are varied in size and performance study is made by running simulations with all the protocols. The only considered performance factors for observation are packet drop, network delay, throughput and network load. IGRP presented better performance as compared to other protocols. The benefit of using IGRP is reduced packet drop, reduced network delay, increased throughput while offering relative better distribution of traffic in a hybrid network.", "title": "" }, { "docid": "05518ac3a07fdfb7bfede8df8a7a500b", "text": "The prevalence of food allergy is rising for unclear reasons, with prevalence estimates in the developed world approaching 10%. 
Knowledge regarding the natural course of food allergies is important because it can aid the clinician in diagnosing food allergies and in determining when to consider evaluation for food allergy resolution. Many food allergies with onset in early childhood are outgrown later in childhood, although a minority of food allergy persists into adolescence and even adulthood. More research is needed to improve food allergy diagnosis, treatment, and prevention.", "title": "" }, { "docid": "b3503da8efba0d5627a3e024b2870af3", "text": "Spin Transfer Torque Magnetic RAM (STT-MRAM) promises low power, great miniaturization prospective (e.g. 22 nm) and easy integration with CMOS process. It becomes actually a strong non-volatile memory candidate for both embedded and standalone applications. However STT-MRAM suffers from important failure and reliability issues compared with the conventional solutions based on magnetic field switching. For example, a read current could write erroneously the stored data, the variability of ultra-thin oxide barrier drives high resistance variation and the injected current in the nanopillar induces lower lifetime etc. This paper classifies firstly all the possible failures of STT-MRAM into ‘‘soft errors’’ and ‘‘hard errors’’, and analyzes their impact on the memory reliability. Based on this work, we can find some efficient design solutions to address respectively these two types of errors and improve the reliability of STTMRAM. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "adad5599122e63cde59322b7ba46461b", "text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.", "title": "" }, { "docid": "e03444a976fbacb91df3a32ff0f27e6f", "text": "In past few years, mobile wallet took spotlight as alternative of existing payment solution in many countries such as USA, South Korea, Germany and China. Although considered as one of the most convenient payment, mobile wallet only claimed 1% from total electronic payment transaction in Indonesia. The aim of this study is to identify the behavior and user acceptance factors of mobile wallet technology. Online survey was conducted among 372 respondents to test hypothesis based on UTAUT2 model. Respondents consisted of 61.29% of male and 38.71% of female with age proportion was dominated by age group of 20's of 78.76%. In addition, 50.81% of respondents never used mobile wallet before and 49.19% of respondents have ever used mobile wallet. Data obtained were confirmed using confirmatory factor analysis and analyzed using structural equation model. 
The study found that habit was the factor that most strongly affected individual behavioral intention to use mobile wallet in Indonesia, followed by social influence, effort expectancy and hedonic motivation. The findings of this research for management can be used as consideration for making product decision related to mobile wallet. Further study is needed, as mobile wallet is still in early stage and another factor beside UTAUT2 should be considered in the study.", "title": "" }, { "docid": "f2603a583b63c1c8f350b3ddabe16642", "text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "title": "" }, { "docid": "361fc2b80275b786d24bf0e979dc7aec", "text": "Well-run datacenter application architectures are heavily instrumented to provide detailed traces of messages and remote invocations. Reconstructing user sessions, call graphs, transaction trees, and other structural information from these messages, a process known as sessionization, is the foundation for a variety of diagnostic, profiling, and monitoring tasks essential to the operation of the datacenter.\n We present the design and implementation of a system which processes log streams at gigabits per second and reconstructs user sessions comprising millions of transactions per second in real time with modest compute resources, while dealing with clock skew, message loss, and other real-world phenomena that make such a task challenging. Our system is based on the Timely Dataflow framework for low latency, data-parallel computation, and we demonstrate its utility with a number of use-cases and traces from a large, operational, mission-critical enterprise data center.", "title": "" }, { "docid": "bbdc213c082fd0573add260e99447f2d", "text": "Although construction has been known as a highly complex application field for autonomous robotic systems, recent advances in this field offer great hope for using robotic capabilities to develop automated construction. Today, space research agencies seek to build infrastructures without human intervention, and construction companies look to robots with the potential to improve construction quality, efficiency, and safety, not to mention flexibility in architectural design. However, unlike production robots used, for instance, in automotive industries, autonomous robots should be designed with special consideration for challenges such as the complexity of the cluttered and dynamic working space, human-robot interactions and inaccuracy in positioning due to the nature of mobile systems and the lack of affordable and precise self-positioning solutions. This paper briefly reviews state-of-the-art research into automated construction by autonomous mobile robots. We address and classify the relevant studies in terms of applications, materials, and robotic systems.
We also identify ongoing challenges and discuss about future robotic requirements for automated construction.", "title": "" }, { "docid": "efb78474b403972f7bffa3e29ded5804", "text": "The idea that memory is composed of distinct systems has a long history but became a topic of experimental inquiry only after the middle of the 20th century. Beginning about 1980, evidence from normal subjects, amnesic patients, and experimental animals converged on the view that a fundamental distinction could be drawn between a kind of memory that is accessible to conscious recollection and another kind that is not. Subsequent work shifted thinking beyond dichotomies to a view, grounded in biology, that memory is composed of multiple separate systems supported, for example, by the hippocampus and related structures, the amygdala, the neostriatum, and the cerebellum. This article traces the development of these ideas and provides a current perspective on how these brain systems operate to support behavior.", "title": "" }, { "docid": "be7a33cc59e8fb297c994d046c6874d9", "text": "Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is one of the powerful ways to reduce the scan time of MR imaging with performance guarantee. However, the computational costs are usually expensive. This paper aims to propose a computationally fast and accurate deep learning algorithm for the reconstruction of MR images from highly down-sampled k-space data. Theory: Based on the topological analysis, we show that the data manifold of the aliasing artifact is easier to learn from a uniform subsampling pattern with additional low-frequency k-space data. Thus, we develop deep aliasing artifact learning networks for the magnitude and phase images to estimate and remove the aliasing artifacts from highly accelerated MR acquisition. Methods: The aliasing artifacts are directly estimated from the distorted magnitude and phase images reconstructed from subsampled k-space data so that we can get an aliasing-free images by subtracting the estimated aliasing artifact from corrupted inputs. Moreover, to deal with the globally distributed aliasing artifact, we develop a multi-scale deep neural network with a large receptive field. Results: The experimental results confirm that the proposed deep artifact learning network effectively estimates and removes the aliasing artifacts. Compared to existing CS methods from single and multi-coli data, the proposed network shows minimal errors by removing the coherent aliasing artifacts. Furthermore, the computational time is by order of magnitude faster. Conclusion: As the proposed deep artifact learning network immediately generates accurate reconstruction, it has great potential for clinical applications.", "title": "" }, { "docid": "f83f099437475aebb81fe92be355f331", "text": "The main receptors for amyloid-beta peptide (Abeta) transport across the blood-brain barrier (BBB) from brain to blood and blood to brain are low-density lipoprotein receptor related protein-1 (LRP1) and receptor for advanced glycation end products (RAGE), respectively. In normal human plasma a soluble form of LRP1 (sLRP1) is a major endogenous brain Abeta 'sinker' that sequesters some 70 to 90 % of plasma Abeta peptides. In Alzheimer's disease (AD), the levels of sLRP1 and its capacity to bind Abeta are reduced which increases free Abeta fraction in plasma. This in turn may increase brain Abeta burden through decreased Abeta efflux and/or increased Abeta influx across the BBB. 
In Abeta immunotherapy, anti-Abeta antibody sequestration of plasma Abeta enhances the peripheral Abeta 'sink action'. However, in contrast to endogenous sLRP1 which does not penetrate the BBB, some anti-Abeta antibodies may slowly enter the brain which reduces the effectiveness of their sink action and may contribute to neuroinflammation and intracerebral hemorrhage. Anti-Abeta antibody/Abeta immune complexes are rapidly cleared from brain to blood via FcRn (neonatal Fc receptor) across the BBB. In a mouse model of AD, restoring plasma sLRP1 with recombinant LRP-IV cluster reduces brain Abeta burden and improves functional changes in cerebral blood flow (CBF) and behavioral responses, without causing neuroinflammation and/or hemorrhage. The C-terminal sequence of Abeta is required for its direct interaction with sLRP and LRP-IV cluster which is completely blocked by the receptor-associated protein (RAP) that does not directly bind Abeta. Therapies to increase LRP1 expression or reduce RAGE activity at the BBB and/or restore the peripheral Abeta 'sink' action, hold potential to reduce brain Abeta and inflammation, and improve CBF and functional recovery in AD models, and by extension in AD patients.", "title": "" }, { "docid": "e4eee5ef276cf0e7457e797d44b20e27", "text": "Scatterplots are effective visualization techniques for multidimensional data that use two (or three) axes to visualize data items as a point at its corresponding x and y Cartesian coordinates. Typically, each axis is bound to a single data attribute. Interactive exploration occurs by changing the data attributes bound to each of these axes. In the case of using scatterplots to visualize the outputs of dimension reduction techniques, the x and y axes are combinations of the true, high-dimensional data. For these spatializations, the axes present usability challenges in terms of interpretability and interactivity. That is, understanding the axes and interacting with them to make adjustments can be challenging. In this paper, we present InterAxis, a visual analytics technique to properly interpret, define, and change an axis in a user-driven manner. Users are given the ability to define and modify axes by dragging data items to either side of the x or y axes, from which the system computes a linear combination of data attributes and binds it to the axis. Further, users can directly tune the positive and negative contribution to these complex axes by using the visualization of data attributes that correspond to each axis. We describe the details of our technique and demonstrate the intended usage through two scenarios.", "title": "" }, { "docid": "5460958ae8ad23fb762593a22b8aad07", "text": "The paper presents an artificial neural network based approach in support of cash demand forecasting for automatic teller machine (ATM). On the start phase a three layer feed-forward neural network was trained using Levenberg-Marquardt algorithm and historical data sets. Then ANN was retuned every week using the last observations from ATM. The generalization properties of the ANN were improved using regularization term which penalizes large values of the ANN weights. Regularization term was adapted online depending on complexity of relationship between input and output variables. Performed simulation and experimental tests have showed good forecasting capacities of ANN. At current stage the proposed procedure is in the implementing phase for cash management tasks in ATM network. 
Key-Words: neural networks, automatic teller machine, cash forecasting", "title": "" } ]
scidocsrr
a86ba780d04b6b7204a112fc23600476
On the Provable Security of (EC)DSA Signatures
[ { "docid": "0332be71a529382e82094239db31ea25", "text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).", "title": "" } ]
[ { "docid": "9824a6ec0809cefdec77a52170670d17", "text": "The use of planar fluidic devices for performing small-volume chemistry was first proposed by analytical chemists, who coined the term “miniaturized total chemical analysis systems” ( TAS) for this concept. More recently, the TAS field has begun to encompass other areas of chemistry and biology. To reflect this expanded scope, the broader terms “microfluidics” and “lab-on-a-chip” are now often used in addition to TAS. Most microfluidics researchers rely on micromachining technologies at least to some extent to produce microflow systems based on interconnected micrometer-dimensioned channels. As members of the microelectromechanical systems (MEMS) community know, however, one can do more with these techniques. It is possible to impart higher levels of functionality by making features in different materials and at different levels within a microfluidic device. Increasingly, researchers have considered how to integrate electrical or electrochemical function into chips for purposes as diverse as heating, temperature sensing, electrochemical detection, and pumping. MEMS processes applied to new materials have also resulted in new approaches for fabrication of microchannels. This review paper explores these and other developments that have emerged from the increasing interaction between the MEMS and microfluidics worlds.", "title": "" }, { "docid": "dd64ac591acfacb6ea514af3f104d0aa", "text": "FluMist influenza A vaccine strains contain the PB1, PB2, PA, NP, M, and NS gene segments of ca A/AA/6/60, the master donor virus-A strain. These gene segments impart the characteristic cold-adapted (ca), attenuated (att), and temperature-sensitive (ts) phenotypes to the vaccine strains. A plasmid-based reverse genetics system was used to create a series of recombinant hybrids between the isogenic non-ts wt A/Ann Arbor/6/60 and MDV-A strains to characterize the genetic basis of the ts phenotype, a critical, genetically stable, biological trait that contributes to the attenuation and safety of FluMist vaccines. PB1, PB2, and NP derived from MDV-A each expressed determinants of temperature sensitivity and the combination of all three gene segments was synergistic, resulting in expression of the characteristic MDV-A ts phenotype. Site-directed mutagenesis analysis mapped the MDV-A ts phenotype to the following four major loci: PB1(1195) (K391E), PB1(1766) (E581G), PB2(821) (N265S), and NP(146) (D34G). In addition, PB1(2005) (A661T) also contributed to the ts phenotype. The identification of multiple genetic loci that control the MDV-A ts phenotype provides a molecular basis for the observed genetic stability of FluMist vaccines.", "title": "" }, { "docid": "73dad13887b3d7abdda75716e406dd59", "text": "This paper studies the convolutional neural network (ConvNet or CNN) from a statistical modeling perspective. The ConvNet has proven to be a very successful discriminative learning machine. In this paper, we explore the generative perspective of the ConvNet. We propose to learn Markov random field models called FRAME (Filters, Random field, And Maximum Entropy) models using the highly sophisticated filters pre-learned by the ConvNet on the big ImageNet dataset. We show that the learned models can generate realistic and rich object and texture patterns in natural scenes. We explain that each learned model corresponds to a new ConvNet unit at the layer above the layer of filters employed by the model. 
We further show that it is possible to learn a generative ConvNet model with a new layer of multiple filters, and the learning algorithm admits an EM interpretation with binary latent variables.", "title": "" }, { "docid": "74444ed7ec8618a6fb71678d2017980c", "text": "A new method for predicting the rotor angle stability status of a power system immediately after a large disturbance is presented. The proposed two-stage method involves estimation of the similarity of post-fault voltage trajectories of the generator buses after the disturbance to some pre-identified templates and then prediction of the stability status using a classifier which takes the similarity values calculated at the different generator buses as inputs. The typical bus voltage variation patterns after a disturbance for both stable and unstable situations are identified from a database of simulations using fuzzy C-means clustering algorithm. The same database is used to train a support vector machine classifier which takes proximity of the actual voltage variations to the identified templates as features. Development of the system and its performance were demonstrated using a case study carried out on the IEEE 39-bus system. Investigations showed that the proposed method can accurately predict the stability status six cycles after the clearance of a fault. Further, the robustness of the proposed method was examined by analyzing its performance in predicting the instability when the network configuration is altered.", "title": "" }, { "docid": "bb5ce42707f086d4ca2c6a5d23587070", "text": "Supervoxel methods such as Simple Linear Iterative Clustering (SLIC) are an effective technique for partitioning an image or volume into locally similar regions, and are a common building block for the development of detection, segmentation and analysis methods. We introduce maskSLIC an extension of SLIC to create supervoxels within regions-of-interest, and demonstrate, on examples from 2-dimensions to 4-dimensions, that maskSLIC overcomes issues that affect SLIC within an irregular mask. We highlight the benefits of this method through examples, and show that it is able to better represent underlying tumour subregions and achieves significantly better results than SLIC on the BRATS 2013 brain tumour challenge data (p=0.001) – outperforming SLIC on 18/20 scans. Finally, we show an application of this method for the analysis of functional tumour subregions and demonstrate that it is more effective than voxel clustering.", "title": "" }, { "docid": "31dd5d340af797a118e5d915ada37f05", "text": "The modular multilevel converter (MMC) with half-bridge submodules (SMs) is the most promising technology for high-voltage direct current (HVDC) grids, but it lacks dc fault clearance capability. There are two main methods to handle the dc-side short-circuit fault. One is to employ the SMs that have dc fault clearance capability, but the power losses are high and the converter has to be blocked during the clearance. The other is to employ the hybrid HVDC breakers. The breaker is capable of interrupting fault current within 5 ms, but this technology is not cost effective, especially in meshed HVDC grids. In this paper, an assembly HVDC breaker and the corresponding control strategy are proposed to overcome these drawbacks. The assembly HVDC breaker consists of an active short-circuit breaker (ASCB), a main mechanical disconnector, a main breaker, and an accessory discharging switch (ADS). 
When a dc-side short-circuit fault occurs, the ASCB and the ADS close immediately to shunt the fault current. The main breaker opens after a short delay to isolate the faulted line from the system and then the mechanical disconnector opens. With the disconnector in open position, the ASCB opens and breaks the current. The proposed breaker can handle the dc-side fault with competitively low cost, and the operating speed is fast enough. A model of a four-terminal monopolar HVDC grid is developed in Power Systems Computer Aided Design / Electromagnetic Transients including DC, and the simulation result proves the validity and the feasibility of the proposed solution.", "title": "" }, { "docid": "72e1a2bf37495439a12a53f4b842c218", "text": "A new transmission model of human malaria in a partially immune population with three discrete delays is formulated for variable host and vector populations. These are latent period in the host population, latent period in the vector population and duration of partial immunity. The results of our mathematical analysis indicate that a threshold parameterR0 exists. ForR0 > 1, the expected number of mosquitoes infected from humansRhm should be greater than a certain critical valueR∗hm or should be less thanR∗hm whenR ∗ hm > 1, for a stable endemic equilibrium to exist. We deduce from model analysis that an increase in the period within which partial immunity is lost increases the spread of the disease. Numerically we deduce that treatment of the partially immune humans assists in reducing the severity of the disease and that transmission blocking vaccines would be effective in a partially immune population. Numerical simulations support our analytical conclusions and illustrate possible behaviour scenarios of the model. c © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "49fe73e28714721e6dc64a3bbeadecc5", "text": "Fingerprint is the most popular biometric trait due to the perceived uniqueness and persistence of friction ridge pattern on human fingers [1]. Following the introduction of iPhone 5S with Touch ID fingerprint sensor in September 2013, most of the mobile phones, such as iPhone 5s/6/6+, Samsung Galaxy S5/S6, HTC One Max, Huawei Honor 7, Meizu MX4 Pro and others, now come with embedded fingerprint sensors for phone unlock. It has been forecasted that 50% of smartphones sold by 2019 will have an embedded fingerprint sensor [2]. With the introduction of Apple Pay, Samsung Pay and Android Pay, fingerprint recognition on mobile devices is leveraged for more than just for device unlock; it can also be used for secure mobile payment and other transactions.", "title": "" }, { "docid": "c132272c8caa7158c0549bd5f2d626aa", "text": "This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve the performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors' construction, including particle pull-out and ablation of the silver coating on the silica filler. 
The newly-developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions demonstrating improvements in the environmental stability. Fabricated flat (forehead) and acicular (hairy sites) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; some initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications.", "title": "" }, { "docid": "c048fcee111850376f585dad381a6eec", "text": "In order to simulate the effects of lightning and switching transients on power lines, coupling and decoupling network (CDN) is necessary for performing surge tests on electrical and electronic equipment. This paper presents a particular analysis of the circuit of CDN and design details according to the requirement of the standard IEC 61000-4-5. The CDN circuits are simulated by electromagnetic transients program (EMTP) to judge the suitability of the selected component values and performance of the designed CDN. Simulation results also show that increasing of decoupling capacitance or inductance can reduce the residual surge voltage on the power line inputs of the CDN. Based on the voltage drop control and residual surge voltage limiting, smaller decoupling inductance and relatively larger decoupling capacitance should be chosen if EUT has a higher rated current. The CDN developed according to the simulation results satisfy the design specifications well.", "title": "" }, { "docid": "3ed6df057a32b9dcf243b5ac367b4912", "text": "This paper presents advancements in induction motor endring design to overcome mechanical limitations and extend the operating speed range and joint reliability of induction machines. A novel endring design met the challenging mechanical requirements of this high speed, high temperature, power dense application, without compromising electrical performance. Analysis is presented of the advanced endring design features including a non uniform cross section, hoop stress relief cuts, and an integrated joint boss, which reduced critical stress concentrations, allowing operation under a broad speed and temperature design range. A generalized treatment of this design approach is presented comparing the concept results to conventional design techniques. Additionally, a low temperature joining process of the bar/end ring connection is discussed that provides the required joint strength without compromising the mechanical strength of the age hardened parent metals. A description of a prototype 2 MW, 15,000 rpm flywheel motor generator embodying this technology is presented", "title": "" }, { "docid": "d2af69233bf30376afb81b204b063c81", "text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. 
Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.", "title": "" }, { "docid": "d233e7031b84316f66a4f4568c907545", "text": "The specific biomechanical alterations related to vitality loss or endodontic procedures are confusing issues for the practitioner and have been controversially approached from a clinical standpoint. The aim of part 1 of this literature review is to present an overview of the current knowledge about composition changes, structural alterations, and status following endodontic therapy and restorative procedures. The basic search process included a systematic review of the PubMed/Medline database between 1990 and 2005, using single or combined key words to obtain the most comprehensive list of references; a perusal of the references of the relevant sources completed the review. Only negligible alterations in tissue moisture and composition attributable to vitality loss or endodontic therapy were reported. Loss of vitality followed by proper endodontic therapy proved to affect tooth biomechanical behavior only to a limited extent. Conversely, tooth strength is reduced in proportion to coronal tissue loss, due to either caries lesion or restorative procedures. Therefore, the best current approach for restoring endodontically treated teeth seems to (1) minimize tissue sacrifice, especially in the cervical area so that a ferrule effect can be created, (2) use adhesive procedures at both radicular and coronal levels to strengthen remaining tooth structure and optimize restoration stability and retention, and (3) use post and core materials with physical properties close to those of natural dentin, because of the limitations of current adhesive procedures.", "title": "" }, { "docid": "f7ff118b8f39fa0843c4861306b4910f", "text": "This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. On the use of row convolution (Amodei et al., 2015), the encoder of the proposed model composes word-level information from the input sequences of characters automatically. Since our model doesn’t rely on the boundaries between each word (as the whitespace boundaries in English), it is also applied to languages without explicit word segmentations (like Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve comparable translation performance with the traditional word based NMT models. Despite the target side is still word based, the proposed model is able to generate much less unknown words.", "title": "" }, { "docid": "27f001247d02f075c9279b37acaa49b3", "text": "A Zadoff–Chu (ZC) sequence is uncorrelated with a non-zero cyclically shifted version of itself. However, this alone is insufficient to mitigate inter-code interference in LTE initial uplink synchronization. The performance of the state-of-the-art algorithms vary widely depending on the specific ZC sequences employed. We develop a systematic procedure to choose the ZC sequences that yield the optimum performance. 
It turns out that the procedure for ZC code selection in LTE standard is suboptimal when the carrier frequency offset is not small.", "title": "" }, { "docid": "729b63fe33d2cc7048a887e3fdb41662", "text": "Integrating biomechanics, behavior and ecology requires a mechanistic understanding of the processes producing the movement of animals. This calls for contemporaneous biomechanical, behavioral and environmental data along movement pathways. A recently formulated unifying movement ecology paradigm facilitates the integration of existing biomechanics, optimality, cognitive and random paradigms for studying movement. We focus on the use of tri-axial acceleration (ACC) data to identify behavioral modes of GPS-tracked free-ranging wild animals and demonstrate its application to study the movements of griffon vultures (Gyps fulvus, Hablizl 1783). In particular, we explore a selection of nonlinear and decision tree methods that include support vector machines, classification and regression trees, random forest methods and artificial neural networks and compare them with linear discriminant analysis (LDA) as a baseline for classifying behavioral modes. Using a dataset of 1035 ground-truthed ACC segments, we found that all methods can accurately classify behavior (80-90%) and, as expected, all nonlinear methods outperformed LDA. We also illustrate how ACC-identified behavioral modes provide the means to examine how vulture flight is affected by environmental factors, hence facilitating the integration of behavioral, biomechanical and ecological data. Our analysis of just over three-quarters of a million GPS and ACC measurements obtained from 43 free-ranging vultures across 9783 vulture-days suggests that their annual breeding schedule might be selected primarily in response to seasonal conditions favoring rising-air columns (thermals) and that rare long-range forays of up to 1750 km from the home range are performed despite potentially heavy energetic costs and a low rate of food intake, presumably to explore new breeding, social and long-term resource location opportunities.", "title": "" }, { "docid": "d01198e88f91a47a1777337d0db41939", "text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).", "title": "" }, { "docid": "1632b81068788aeeb4e458e340bbcec9", "text": "We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad), and navigation to the target, from an arbitrary initial position and orientation. 
We use vision for precise target detection and recognition, and a combination of vision and GPS for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate, robust and repeatable.", "title": "" }, { "docid": "340f7af85199a263115fe917f1ea3dc7", "text": "We present a new system for the harmonic analysis of popular musical audio. It is focused on chord estimation, although the proposed system additionally estimates the key sequence and bass notes. It is distinct from competing approaches in two main ways. First, it makes use of a new improved chromagram representation of audio that takes the human perception of loudness into account. Furthermore, it is the first system for joint estimation of chords, keys, and bass notes that is fully based on machine learning, requiring no expert knowledge to tune the parameters. This means that it will benefit from future increases in available annotated audio files, broadening its applicability to a wider range of genres. In all of three evaluation scenarios, including a new one that allows evaluation on audio for which no complete ground truth annotation is available, the proposed system is shown to be faster, more memory efficient, and more accurate than the state-of-the-art.", "title": "" }, { "docid": "2fc294f2ab50b917f36155c0b9e1847d", "text": "Social and cultural conventions are an often-neglected aspect of intelligent-machine development.", "title": "" } ]
scidocsrr
2eb8db41d00b61bd186751888eab560f
Decomposition of Organometal Halide Perovskite Films on Zinc Oxide Nanoparticles.
[ { "docid": "22d153c01c82117466777842724bbaca", "text": "State-of-the-art photovoltaics use high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high-temperature crystal growth processes. We demonstrate a solution-based hot-casting technique to grow continuous, pinhole-free thin films of organometallic perovskites with millimeter-scale crystalline grains. We fabricated planar solar cells with efficiencies approaching 18%, with little cell-to-cell variability. The devices show hysteresis-free photovoltaic response, which had been a fundamental bottleneck for the stable operation of perovskite devices. Characterization and modeling attribute the improved performance to reduced bulk defects and improved charge carrier mobility in large-grain devices. We anticipate that this technique will lead the field toward synthesis of wafer-scale crystalline perovskites, necessary for the fabrication of high-efficiency solar cells, and will be applicable to several other material systems plagued by polydispersity, defects, and grain boundary recombination in solution-processed thin films.", "title": "" } ]
[ { "docid": "1e4292950f907d26b27fa79e1e8fa41f", "text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.", "title": "" }, { "docid": "a4738508bec1fe5975ce92c2239d30d0", "text": "The transpalatal arch might be one of the most common intraoral auxiliary fixed appliances used in orthodontics in order to provide dental anchorage. The aim of the present case report is to describe a case in which an adult patient with a tendency to class III, palatal compression, and bilateral posterior crossbite was treated with double transpalatal bars in order to control the torque of both the first and the second molars. Double transpalatal arches on both first and second maxillary molars are a successful appliance in order to control the posterior sectors and improve the torsion of the molars. They allow the professional to gain overbite instead of losing it as may happen with other techniques and avoid enlarging of Wilson curve, obtaining a more stable occlusion without the need for extra help from bone anchorage.", "title": "" }, { "docid": "26fb308cdcb530751ec04654f5527ebd", "text": "Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), projection pursuit regression (PPR) are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.", "title": "" }, { "docid": "b43553a835a829e00b15f8f843a51c55", "text": "Much has been written on implementation of enterprise resource planning (ERP) systems in organizations of various sizes. The literature is replete with many cases studies of both successful and unsuccessful ERP implementations. However, there have been very few empirical studies that attempt to delineate the critical issues that drive successful implementation of ERP systems. 
Although the failure rates of ERP implementations have been publicized widely, this has not distracted companies from investing large sums of money on ERP systems. This study reports the results of an empirical research on the critical issues affecting successful ERP implementation. Through the study, eight factors were identified that attempts to explain 86% of the variances that impact ERP implementation. There was a strong correlation between successfully implementing ERP and six out of the eight factors identified. # 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f1e293b4b896547b17b5becb1e06cb47", "text": "Occupational therapy has been an invisible profession, largely because the public has had difficulty grasping the concept of occupation. The emergence of occupational science has the potential of improving this situation. Occupational science is firmly rooted in the founding ideas of occupational therapy. In the future, the nature of human occupation will be illuminated by the development of a basic theory of occupational science. Occupational science, through research and theory development, will guide the practice of occupational therapy. Applications of occupational science to the practice of pediatric occupational therapy are presented. Ultimately, occupational science will prepare pediatric occupational therapists to better meet the needs of parents and their children.", "title": "" }, { "docid": "7ca1c9096c6176cb841ae7f0e7262cb7", "text": "“Industry 4.0” is recognized as the future of industrial production in which concepts as Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision, whilst identifying opportunities and challenges of Industry 4.0 contextualizing the potential that represents industrial digitalization and how technological advances can contribute for a new perspective on manufacturing production. It is analysed a set of barriers to the full implementation of Industry 4.0 vision, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity that is involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0 main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion on the main challenges that might be an obstacle for a successful decision strategy.", "title": "" }, { "docid": "3e177f8b02a5d67c7f4d93ce601c4539", "text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. 
The results of evaluation the proposed method shows an improvement in the text classification problem using the DTCNN compared to baseline approaches.", "title": "" }, { "docid": "717988e7bada51ad5c4115f4d43de01a", "text": "I offer an overview of the rapidly growing field of mindfulness-based interventions (MBIs). A working definition of mindfulness in this context includes the brahma viharas, sampajanna and appamada, and suggests a very particular mental state which is both wholesome and capable of clear and penetrating insight into the nature of reality. The practices in mindfulness-based stress reduction (MBSR) that apply mindfulness to the four foundations are outlined, along with a brief history of the program and the original intentions of the founder, Jon Kabat-Zinn. The growth and scope of these interventions are detailed with demographics provided by the Center for Mindfulness, an overview of salient research studies and a listing of the varied MBIs that have grown out of MBSR. The question of ethics is explored, and other challenges are raised including teacher qualification and clarifying the “outer limits,” or minimum requirements, of what constitutes an MBI. Current trends are explored, including the increasing number of cohort-specific interventions as well as the publication of books, articles, and workbooks by a new generation of MBI teachers. Together, they form an emerging picture of MBIs as their own new “lineage,” which look to MBSR as their inspiration and original source. The potential to bring benefit to new fields, such as government and the military, represent exciting opportunities for MBIs, along with the real potential to transform health care. Sufficient experience in the delivery of MBIs has been garnered to offer the greater contemplative community valuable resources such as secular language, best practices, and extensive research.", "title": "" }, { "docid": "6bb09d944206222acccbc5613c6f854a", "text": "A new bilingual dictionary can be built using two existing bilingual dictionaries, such as Japanese-English and English-Chinese to build Japanese-Chinese dictionary. However, Japanese and Chinese are nearer languages than English, there should be a more direct way of doing this. Since a lot of Japanese words are composed of kanji, which are similar to hanzi in Chinese, we attempt to build a dictionary for kanji words by simple conversion from kanji to hanzi. Our survey shows that around 2/3 of the nouns and verbal nouns in Japanese are kanji words, and more than 1/3 of them can be translated into Chinese directly. The accuracy of conversion is 97%. Besides, we obtain translation candidates for 24% of the Japanese words using English as a pivot language with 77% accuracy. By adding the kanji/hanzi conversion method, we increase the candidates by 9%, to 33%, with better quality candidates.", "title": "" }, { "docid": "0a34ed8b01c6c700e7bb8bb15644590f", "text": "Almost all automatic semantic role labeling (SRL) systems rely on a preliminary parsing step that derives a syntactic structure from the sentence being analyzed. This makes the choice of syntactic representation an essential design decision. In this paper, we study the influence of syntactic representation on the performance of SRL systems. Specifically, we compare constituent-based and dependencybased representations for SRL of English in the FrameNet paradigm. 
Contrary to previous claims, our results demonstrate that the systems based on dependencies perform roughly as well as those based on constituents: For the argument classification task, dependencybased systems perform slightly higher on average, while the opposite holds for the argument identification task. This is remarkable because dependency parsers are still in their infancy while constituent parsing is more mature. Furthermore, the results show that dependency-based semantic role classifiers rely less on lexicalized features, which makes them more robust to domain changes and makes them learn more efficiently with respect to the amount of training data.", "title": "" }, { "docid": "622b0d9526dfee6abe3a605fa83e92ed", "text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.", "title": "" }, { "docid": "88ae7446c9a63086bda9109a696459bd", "text": "OBJECTIVES\nTo perform a systematic review of neurologic involvement in Systemic sclerosis (SSc) and Localized Scleroderma (LS), describing clinical features, neuroimaging, and treatment.\n\n\nMETHODS\nWe performed a literature search in PubMed using the following MeSH terms, scleroderma, systemic sclerosis, localized scleroderma, localized scleroderma \"en coup de sabre\", Parry-Romberg syndrome, cognitive impairment, memory, seizures, epilepsy, headache, depression, anxiety, mood disorders, Center for Epidemiologic Studies Depression (CES-D), SF-36, Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), Patient Health Questionnaire-9 (PHQ-9), neuropsychiatric, psychosis, neurologic involvement, neuropathy, peripheral nerves, cranial nerves, carpal tunnel syndrome, ulnar entrapment, tarsal tunnel syndrome, mononeuropathy, polyneuropathy, radiculopathy, myelopathy, autonomic nervous system, nervous system, electroencephalography (EEG), electromyography (EMG), magnetic resonance imaging (MRI), and magnetic resonance angiography (MRA). Patients with other connective tissue disease knowingly responsible for nervous system involvement were excluded from the analyses.\n\n\nRESULTS\nA total of 182 case reports/studies addressing SSc and 50 referring to LS were identified. SSc patients totalized 9506, while data on 224 LS patients were available. In LS, seizures (41.58%) and headache (18.81%) predominated. Nonetheless, descriptions of varied cranial nerve involvement and hemiparesis were made. Central nervous system involvement in SSc was characterized by headache (23.73%), seizures (13.56%) and cognitive impairment (8.47%). Depression and anxiety were frequently observed (73.15% and 23.95%, respectively). 
Myopathy (51.8%), trigeminal neuropathy (16.52%), peripheral sensorimotor polyneuropathy (14.25%), and carpal tunnel syndrome (6.56%) were the most frequent peripheral nervous system involvement in SSc. Autonomic neuropathy involving cardiovascular and gastrointestinal systems was regularly described. Treatment of nervous system involvement, on the other hand, varied in a case-to-case basis. However, corticosteroids and cyclophosphamide were usually prescribed in severe cases.\n\n\nCONCLUSIONS\nPreviously considered a rare event, nervous system involvement in scleroderma has been increasingly recognized. Seizures and headache are the most reported features in LS en coup de sabre, while peripheral and autonomic nervous systems involvement predominate in SSc. Moreover, recently, reports have frequently documented white matter lesions in asymptomatic SSc patients, suggesting smaller branches and perforating arteries involvement.", "title": "" }, { "docid": "ee73fa4e07cea9aeae79c5144923a018", "text": "Omega-6 (n-6) polyunsaturated fatty acids (PUFA) (e.g., arachidonic acid (AA)) and omega-3 (n-3) PUFA (e.g., eicosapentaenoic acid (EPA)) are precursors to potent lipid mediator signalling molecules, termed \"eicosanoids,\" which have important roles in the regulation of inflammation. In general, eicosanoids derived from n-6 PUFA are proinflammatory while eicosanoids derived from n-3 PUFA are anti-inflammatory. Dietary changes over the past few decades in the intake of n-6 and n-3 PUFA show striking increases in the (n-6) to (n-3) ratio (~15 : 1), which are associated with greater metabolism of the n-6 PUFA compared with n-3 PUFA. Coinciding with this increase in the ratio of (n-6) : (n-3) PUFA are increases in chronic inflammatory diseases such as nonalcoholic fatty liver disease (NAFLD), cardiovascular disease, obesity, inflammatory bowel disease (IBD), rheumatoid arthritis, and Alzheimer's disease (AD). By increasing the ratio of (n-3) : (n-6) PUFA in the Western diet, reductions may be achieved in the incidence of these chronic inflammatory diseases.", "title": "" }, { "docid": "4dd59c743d7f4ae1f6a05f20a4bd6935", "text": "Self-attentive feed-forward sequence models have been shown to achieve impressive results on sequence modeling tasks including machine translation [31], image generation [30] and constituency parsing [18], thereby presenting a compelling alternative to recurrent neural networks (RNNs) which has remained the de-facto standard architecture for many sequence modeling problems to date. Despite these successes, however, feed-forward sequence models like the Transformer [31] fail to generalize in many tasks that recurrent models handle with ease (e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time [28]). Moreover, and in contrast to RNNs, the Transformer model is not computationally universal, limiting its theoretical expressivity. In this paper we propose the Universal Transformer which addresses these practical and theoretical shortcomings and we show that it leads to improved performance on several tasks. Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. 
Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.", "title": "" }, { "docid": "25389fbfbefcbfb3506b6674b70e3d89", "text": "This paper argues that a new class of geographically distributed network services is emerging, and that the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports both researchers that want to develop new services, and clients that want to use them. This dual use, in turn, suggests four design principles that are not widely supported in existing testbeds: services should be able to run continuously and access a slice of the overlay's resources, control over resources should be distributed, overlay management services should be unbundled and run in their own slices, and APIs should be designed to promote application development. We believe a testbed that supports these design principles will facilitate the emergence of a new service-oriented network architecture. Towards this end, the paper also briefly describes PlanetLab, an overlay network being designed with these four principles in mind.", "title": "" }, { "docid": "9ff912ad71c84cfba286f1be7bd8d4b3", "text": "This article compares traditional industrial-organizational psychology (I-O) research published in Journal of Applied Psychology (JAP) with organizational behavior management (OBM) research published in Journal of Organizational Behavior Management (JOBM). The purpose of this comparison was to identify similarities and differences with respect to research topics and methodologies, and to offer suggestions for what OBM researchers and practitioners can learn from I-O. Articles published in JAP from 1987-1997 were reviewed and compared to articles published during the same decade in JOBM (Nolan, Jarema, & Austin, 1999). This comparison includes (a) author characteristics, (b) authors published in both journals, (c) topics addressed, (d) type of article, and (e) research characteristics and methodologies.
Among the conclusions are: (a) the primary relative strength of OBM is its practical significance, demonstrated by the proportion of research addressing applied issues; (b) the greatest strength of traditional I-O appears to be the variety and complexity of organizational research topics; and (c) each field could benefit from contact with research published in the other. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-342-9678. E-mail address: <getinfo@haworthpressinc.com> Website: <http://www.HaworthPress.com>]", "title": "" }, { "docid": "8c35fd3040e4db2d09e3d6dc0e9ae130", "text": "Internet of Things is referred to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and can be controlled remotely via cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making its way into the market with its own advantages in high range and better speed. Along with IEEE, WiFi Alliance has a new standard for the proximity applications. Neighbor Awareness Network (NAN) popularly known as WiFi Aware is that standard which enables low power discovery over WiFi and can light up many proximity based used cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications with its benefits in some of the scenarios. When we consider WiFi the infrastructure already exists in terms of access points all around in public and smart phones or tablets come with WiFi as a default feature hence enabling NAN can be easy and if we can pair them with IoT, many innovative use cases can evolve.", "title": "" }, { "docid": "f406d68721a39b86c122a2fe48794590", "text": "We have witnessed the tremendous growth of videos over the Internet, where most of these videos are typically paired with abundant sentence descriptions, such as video titles, captions and comments. Therefore, it has been increasingly crucial to associate specific video segments with the corresponding informative text descriptions, for a deeper understanding of video content. This motivates us to explore an overlooked problem in the research community — temporal sentence localization in video, which aims to automatically determine the start and end points of a given sentence within a paired video. For solving this problem, we face three critical challenges: (1) preserving the intrinsic temporal structure and global context of video to locate accurate positions over the entire video sequence; (2) fully exploring the sentence semantics to give clear guidance for localization; (3) ensuring the efficiency of the localization method to adapt to long videos. To address these issues, we propose a novel Attention Based Location Regression (ABLR) approach to localize sentence descriptions in videos in an efficient end-to-end manner. Specifically, to preserve the context information, ABLR first encodes both video and sentence via Bi-directional LSTM networks. 
Then, a multi-modal co-attention mechanism is presented to generate both video and sentence attentions. The former reflects the global video structure, while the latter highlights the sentence details for temporal localization. Finally, a novel attention based location prediction network is designed to regress the temporal coordinates of sentence from the previous attentions. We evaluate the proposed ABLR approach on two public datasets ActivityNet Captions and TACoS. Experimental results show that ABLR significantly outperforms the existing approaches in both effectiveness and", "title": "" }, { "docid": "2e02a16fa9c40bfb7e498bef8927e5ff", "text": "There exist two broad approaches to information retrieval (IR) in the legal domain: those based on manual knowledge engineering (KE) and those based on natural language processing (NLP). The KE approach is grounded in artificial intelligence (AI) and case-based reasoning (CBR), whilst the NLP approach is associated with open domain statistical retrieval. We provide some original arguments regarding the focus on KE-based retrieval in the past and why this is not sustainable in the long term. Legal approaches to questioning (NLP), rather than arguing (CBR), are proposed as the appropriate jurisprudential and cognitive underpinning for legal IR. Recall within the context of precision is proposed as a better fit to law than the ‘total recall’ model of the past, wherein conceptual and contextual search are combined to improve retrieval performance for both parties in a dispute.", "title": "" }, { "docid": "dd271275654da4bae73ee41d76fe165c", "text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unitis often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare. Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience theirintensive care stay their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory research with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\" Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach is crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.", "title": "" } ]
scidocsrr
d26c5628da6902e7d4fd91586e50068f
Optimizing restoration with segment routing
[ { "docid": "15727b1d059064d118269d0217c0c014", "text": "Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middle boxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice.", "title": "" } ]
[ { "docid": "e2dbcae54c48a88f840e09112c55fa86", "text": "This paper aims to improve the throughput of a broadcasting system that supports the transmission of multiple services with differentiated minimum signal-to-noise ratios (SNRs) required for successful receptions simultaneously. We propose a novel multiplexing method called bit division multiplexing (BDM), which outperforms the conventional time division multiplexing (TDM) counterpart by extending the multiplexing from symbol level to bit level. Benefiting from multiple error protection levels of bits within each high-order constellation symbol, BDM can provide so-called nonlinear allocation of the channel resources. Both average mutual information (AMI) analysis and simulation results demonstrate that, compared with TDM, BDM can significantly improve the overall transmission rate of multiple services subject to the differentiated minimum SNRs required for successful receptions, or decrease the minimum SNRs required for successful receptions subject to the transmission rate requirements of multiple services.", "title": "" }, { "docid": "f68b11af8958117f75fc82c40c51c395", "text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.", "title": "" }, { "docid": "0f452a5b005437d05a18822dc929828b", "text": "In recent years, new studies concentrating on analyzing user personality and finding credible content in social media have become quite popular. Most such work augments features from textual content with features representing the user's social ties and the tie strength. Social ties are crucial in understanding the network the people are a part of. However, textual content is extremely useful in understanding topics discussed and the personality of the individual. We bring a new dimension to this type of analysis with methods to compute the type of ties individuals have and the strength of the ties in each dimension. We present a new genre of behavioral features that are able to capture the \"function\" of a specific relationship without the help of textual features. 
Our novel features are based on the statistical properties of communication patterns between individuals such as reciprocity, assortativity, attention and latency. We introduce a new methodology for determining how such features can be compared to textual features, and show, using Twitter data, that our features can be used to capture contextual information present in textual features very accurately. Conversely, we also demonstrate how textual features can be used to determine social attributes related to an individual.", "title": "" }, { "docid": "ee5670c36cf9037918ecd176dae3c881", "text": "This paper focuses on the motion control problem of an omnidirectional mobile robot. A new control method based on the inverse input-output linearized kinematic model is proposed. As the actuator saturation and actuator dynamics have important impacts on the robot performance, this control law takes into account these two aspects and guarantees the stability of the closed-loop control system. Real-world experiments with an omnidirectional middle-size RoboCup robot verifies the performance of this proposed control algorithm.", "title": "" }, { "docid": "58f093ac65039299c40da33fbce3f7ee", "text": "Currently computers are changing from single isolated devices into entry points into a worldwide network of information exchange and business transactions. Support in data, information, and knowledge exchange is becoming the key issue in current computer technology. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies. The purpose of the paper is precisely concerned with this necessity. We will present OIL, which is a proposal for such a standard. It is based on existing proposals such as OKBC, XOL and RDF schema, enriching them with necessary features for expressing ontologies. The paper sketches the main ideas of OIL.", "title": "" }, { "docid": "c45447fd682f730f350bae77c835b63a", "text": "In this paper, we demonstrate a high heat resistant bonding method by Cu/Sn transient liquid phase sintering (TLPS) method can be applied to die-attachment of silicon carbide (SiC)-MOSFET in high temperature operation power module. The die-attachment is made of nano-composite Cu/Sn TLPS paste. The die shear strength was 40 MPa for 3 × 3 mm2 SiC chip after 1,000 cycles of thermal cycle testing between −40 °C and 250 °C. This indicated a high reliability of Cu/Sn die-attachment. The thermal resistance of the Cu/Sn die-attachment was evaluated by transient thermal analysis using a sample in which the SiC-MOSFET (die size: 4.04 × 6.44 mm2) was bonded with Cu/Sn die-attachment. The thermal resistance of Cu/Sn die-attachment was 0.13 K/W, which was comparable to the one of Au/Ge die-attachment (0.12 K/W). The validity of nano-composite Cu/Sn TLPS paste as a die-attachment for high-temperature operation SiC power module is confirmed.", "title": "" }, { "docid": "33b281b2f3509a6fdc3fd5f17f219820", "text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. 
We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.", "title": "" }, { "docid": "f279060b5ebe9b163d08f29b0e70619c", "text": "Silver film over nanospheres (AgFONs) were successfully employed as surface-enhanced Raman spectroscopy (SERS) substrates to characterize several artists' red dyes including: alizarin, purpurin, carminic acid, cochineal, and lac dye. Spectra were collected on sample volumes (1 x 10(-6) M or 15 ng/microL) similar to those that would be found in a museum setting and were found to be higher in resolution and consistency than those collected on silver island films (AgIFs). In fact, to the best of the authors' knowledge, this work presents the highest resolution spectrum of the artists' material cochineal to date. In order to determine an optimized SERS system for dye identification, experiments were conducted in which laser excitation wavelengths were matched with correlating AgFON localized surface plasmon resonance (LSPR) maxima. Enhancements of approximately two orders of magnitude were seen when resonance SERS conditions were met in comparison to non-resonance SERS conditions. Finally, because most samples collected in a museum contain multiple dyestuffs, AgFONs were employed to simultaneously identify individual dyes within several dye mixtures. These results indicate that AgFONs have great potential to be used to identify not only real artwork samples containing a single dye but also samples containing dyes mixtures.", "title": "" }, { "docid": "3cf4ef33356720e55748c7f14383830d", "text": "Article history: Received 7 September 2015 Received in revised form 15 February 2016 Accepted 27 March 2016 Available online 14 April 2016 For many organizations, managing both economic and environmental performance has emerged as a key challenge. Further,with expanding globalization organizations are finding itmore difficult tomaintain adequate supplier relations to balance both economic and environmental performance initiatives. Drawing on transaction cost economics, this study examines how novel information technology like cloud computing can help firms not only maintain adequate supply chain collaboration, but also balance both economic and environmental performance. We analyze survey data from 247 IT and supply chain professionals using structural equation modeling and partial least squares to verify the robustness of our results. Our analyses yield several interesting findings. First, contrary to other studies we find that collaboration does not necessarily affect environmental performance and only partiallymediates the relationship between cloud computing and economic performance. Secondly, the results of our survey provide evidence of the direct effect of cloud computing on both economic and environmental performance. Published by Elsevier B.V.", "title": "" }, { "docid": "6d3c6bb57ecfeacf1e3fac0d4e35dd46", "text": "In this paper we show that two dynamical invariants, the second order Renyi entropy and the correlation dimension, can be estimated from recurrence plots (RPs) with arbitrary embedding dimension and delay. 
This fact is interesting as these quantities are even invariant if no embedding is used. This is an important advantage of RPs compared to other techniques of nonlinear data analysis. These estimates for the correlation dimension and entropy are robust and, moreover, can be obtained at a low numerical cost. We exemplify our results for the Rossler system, the funnel attractor and the Mackey-Glass system. In the last part of the paper we estimate dynamical invariants for data from some fluid dynamical experiments and confirm previous evidence for low dimensional chaos in this experimental system.", "title": "" }, { "docid": "1ff8d3270f4884ca9a9c3d875bdf1227", "text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/ invisible surfaces.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "0ad47e79e9bea44a76029e1f24f0a16c", "text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. 
Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.", "title": "" }, { "docid": "45f8ee067c8e70b64ba879cf9415e107", "text": "Visualizing the intellectual structure of scientific domains using co-cited units such as references or authors has become a routine for domain analysis. In previous studies, paper-reference matrices are usually transformed into reference-reference matrices to obtain co-citation relationships, which are then visualized in different representations, typically as node-link networks, to represent the intellectual structures of scientific domains. Such network visualizations sometimes contain tightly knit components, which make visual analysis of the intellectual structure a challenging task. In this study, we propose a new approach to reveal co-citation relationships. Instead of using a reference-reference matrix, we directly use the original paper-reference matrix as the information source, and transform the paper-reference matrix into an FP-tree and visualize it in a Java-based prototype system. We demonstrate the usefulness of our approach through visual analyses of the intellectual structure of two domains: information visualization and Sloan Digital Sky Survey (SDSS). The results show that our visualization not only retains the major information of co-citation relationships, but also reveals more detailed sub-structures of tightly knit clusters than a conventional node-link network visualization.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "f071772f5789d20f51ffa12a61b7fbe3", "text": "The dynamics of a quadrotor are a simplified form of helicopter dynamics that exhibit the same basic problems of underactuation, strong coupling, multi-input/multi-output design, and unknown nonlinearities. Control design for the quadrotor is more tractable yet reveals corresponding approaches for helicopter and UAV control design. 
In this paper, a backstepping approach is used for quadrotor controller design. In contrast to most other approaches, we apply backstepping on the Lagrangian form of the dynamics, not the state space form. This is complicated by the fact that the Lagrangian form for the position dynamics is bilinear in the controls. We confront this problem by using an inverse kinematics solution akin to that used in robotics. In addition, two neural nets are introduced to estimate the aerodynamic components, one for aerodynamic forces and one for aerodynamic moments. The result is a controller of intuitively appealing structure having an outer kinematics loop for position control and an inner dynamics loop for attitude control. The control approach described in this paper is robust since it explicitly deals with unmodeled state-dependent disturbances and forces without needing any prior knowledge of the same. A simulation study validates the results obtained in the paper.", "title": "" }, { "docid": "8af1865e0adfedb11d9ade95bb39f797", "text": "In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models for both data collected from humans describing their perceptions of musical mood and quantitative features derived from the audio signal. In previous work, we have presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multiple players within the two-dimensional Arousal-Valence representation of emotion. Using this data, we present a system linking models of acoustic features and human data to provide estimates of the emotional content of music according to the arousal-valence space. Furthermore, in keeping with the dynamic nature of musical mood we demonstrate the potential of this approach to track the emotional changes in a song over time. We investigate the utility of a range of acoustic features based on psychoacoustic and music-theoretic representations of the audio for this application. Finally, a simplified version of our system is re-incorporated into MoodSwings as a simulated partner for single-players, providing a potential platform for furthering perceptual studies and modeling of musical mood.", "title": "" }, { "docid": "8fba7e05a5dc6da4e639beccedb4dfd6", "text": "OBJECTIVE\nTo examine whether pretreatment emotional distress in women is associated with achievement of pregnancy after a cycle of assisted reproductive technology.\n\n\nDESIGN\nMeta-analysis of prospective psychosocial studies.\n\n\nDATA SOURCES\nPubMed, Medline, Embase, PsycINFO, PsychNET, ISI Web of Knowledge, and ISI Web of Science were searched for articles published from 1985 to March 2010 (inclusive). We also undertook a hand search of reference lists and contacted 29 authors. Eligible studies were prospective studies reporting a test of the association between pretreatment emotional distress (anxiety or depression) and pregnancy in women undergoing a single cycle of assisted reproductive technology.
Review methods Two authors independently assessed the studies for eligibility and quality (using criteria adapted from the Newcastle-Ottawa quality scale) and extracted data. Authors contributed additional data not included in original publication.\n\n\nRESULTS\nFourteen studies with 3583 infertile women undergoing a cycle of fertility treatment were included in the meta-analysis. The effect size used was the standardised mean difference (adjusted for small sample size) in pretreatment anxiety or depression (priority on anxiety where both measured) between women who achieved a pregnancy (defined as a positive pregnancy test, positive fetal heart scan, or live birth) and those who did not. Pretreatment emotional distress was not associated with treatment outcome after a cycle of assisted reproductive technology (standardised mean difference -0.04, 95% confidence interval -0.11 to 0.03 (fixed effects model); heterogeneity I²=14%, P=0.30). Subgroup analyses according to previous experience of assisted reproductive technology, composition of the not pregnant group, and timing of the emotional assessment were not significant. The effect size did not vary according to study quality, but a significant subgroup analysis on timing of the pregnancy test, a contour enhanced funnel plot, and Egger's test indicated the presence of moderate publication bias.\n\n\nCONCLUSIONS\nThe findings of this meta-analysis should reassure women and doctors that emotional distress caused by fertility problems or other life events co-occurring with treatment will not compromise the chance of becoming pregnant.", "title": "" }, { "docid": "e96eaf2bde8bf50605b67fb1184b760b", "text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects “two or four times greater than that expected from their THC content, based on animal and human studies”. Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified “powerful synergists” in cannabis extracts, causing 330% greater activity in mice than THC alone. 
The clinical contribution of other CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an “entourage effect” (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), betacaryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. (1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios. These data would E. B. Russo ()) Montana Neurobehavioral Specialists, 900 North Orange Street, Missoula, MT, 59802 USA e-mail: erusso@blackfoot.net", "title": "" }, { "docid": "069636576cbf6c5dd8cead8fff32ea4b", "text": "Sleep-disordered breathing-comprising obstructive sleep apnoea (OSA), central sleep apnoea (CSA), or a combination of the two-is found in over half of heart failure (HF) patients and may have harmful effects on cardiac function, with swings in intrathoracic pressure (and therefore preload and afterload), blood pressure, sympathetic activity, and repetitive hypoxaemia. It is associated with reduced health-related quality of life, higher healthcare utilization, and a poor prognosis. Whilst continuous positive airway pressure (CPAP) is the treatment of choice for patients with daytime sleepiness due to OSA, the optimal management of CSA remains uncertain. There is much circumstantial evidence that the treatment of OSA in HF patients with CPAP can improve symptoms, cardiac function, biomarkers of cardiovascular disease, and quality of life, but the quality of evidence for an improvement in mortality is weak. For systolic HF patients with CSA, the CANPAP trial did not demonstrate an overall survival or hospitalization advantage for CPAP. 
A minute ventilation-targeted positive airway therapy, adaptive servoventilation (ASV), can control CSA and improves several surrogate markers of cardiovascular outcome, but in the recently published SERVE-HF randomized trial, ASV was associated with significantly increased mortality and no improvement in HF hospitalization or quality of life. Further research is needed to clarify the therapeutic rationale for the treatment of CSA in HF. Cardiologists should have a high index of suspicion for sleep-disordered breathing in those with HF, and work closely with sleep physicians to optimize patient management.", "title": "" } ]
scidocsrr
e8d6cc736a62a58833f1a3960c18ca26
Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
[ { "docid": "b5a4b5b3e727dde52a9c858d3360a2e7", "text": "Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.", "title": "" } ]
[ { "docid": "1c891aa5787d52497f8869011b234440", "text": "This paper compares different indexing techniques proposed for supporting efficient access to temporal data. The comparison is based on a collection of important performance criteria, including the space consumed, update processing, and query time for representative queries. The comparison is based on worst-case analysis, hence no assumptions on data distribution or query frequencies are made. When a number of methods have the same asymptotic worst-case behavior, features in the methods that affect average case behavior are discussed. Additional criteria examined are the pagination of an index, the ability to cluster related data together, and the ability to efficiently separate old from current data (so that larger archival storage media such as write-once optical disks can be used). The purpose of the paper is to identify the difficult problems in accessing temporal data and describe how the different methods aim to solve them. A general lower bound for answering basic temporal queries is also introduced.", "title": "" }, { "docid": "87bb66ee83723cae617d9e328fe498a3", "text": "Grounding natural language in images essentially requires composite visual reasoning. However, existing methods over-simplify the composite nature of language into a monolithic sentence embedding or a coarse composition of subject-predicate-object triplet. They might perform well on short phrases, but generally fail in longer sentences, mainly due to the over-fitting to certain vision-language bias. In this paper, we propose to ground natural language in an intuitive, explainable, and composite fashion as it should be. In particular, we develop a novel modular network called Neural Module Tree network (NMTREE) that regularizes the visual grounding along the dependency parsing tree of the sentence, where each node is a module network that calculates or accumulates the grounding score in a bottom-up direction where as needed. NMTREE disentangles the visual grounding from the composite reasoning, allowing the former to only focus on primitive and easyto-generalize patterns. To reduce the impact of parsing errors, we train the modules and their assembly end-to-end by using the Gumbel-Softmax approximation and its straightthrough gradient estimator, accounting for the discrete process of module selection. Overall, the proposed NMTREE not only consistently outperforms the state-of-the-arts on several benchmarks and tasks, but also shows explainable reasoning in grounding score calculation. Therefore, NMTREE shows a good direction in closing the gap between explainability and performance.", "title": "" }, { "docid": "1d1f93011e83bcefd207c845b2edafcd", "text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. 
Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers, however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers, thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.", "title": "" }, { "docid": "bae3d6ffee5380ea6352b8b384667d76", "text": "A flexible transparent modify dipole antenna printed on PET film is presented in this paper. The proposed antenna was designed to operate at 2.4GHz for ISM applications. The impedance characteristic and the radiation characteristic were simulated and measured. The proposed antenna has good performance. It can be easily mounted on conformal shape, because it is fabricated on PET film having the flexible characteristic.", "title": "" }, { "docid": "2d8094fc287a36d7d011aef42eff01ca", "text": "Poor quality data may be detected and corrected by performing various quality assurance activities that rely on techniques with different efficacy and cost. In this paper, we propose a quantitative approach for measuring and comparing the effectiveness of these data quality (DQ) techniques. Our definitions of effectiveness are inspired by measures proposed in Information Retrieval. We show how the effectiveness of a DQ technique can be mathematically estimated in general cases, using formal techniques that are based on probabilistic assumptions. We then show how the resulting effectiveness formulas can be used to evaluate, compare and make choices involving DQ techniques.", "title": "" }, { "docid": "03c588f89216ee5b0b6392730fe2159f", "text": "In this paper, a three-port converter with three active full bridges, two series-resonant tanks, and a three-winding transformer is proposed. It uses a single power conversion stage with high-frequency link to control power flow between batteries, load, and a renewable source such as fuel cell. The converter has capabilities of bidirectional power flow in the battery and the load port. Use of series-resonance aids in high switching frequency operation with realizable component values when compared to existing three-port converter with only inductors. 
The converter has high efficiency due to soft-switching operation in all three bridges. Steady-state analysis of the converter is presented to determine the power flow equations, tank currents, and soft-switching region. Dynamic analysis is performed to design a closed-loop controller that will regulate the load-side port voltage and source-side port current. Design procedure for the three-port converter is explained and experimental results of a laboratory prototype are presented.", "title": "" }, { "docid": "4f6dffc87baa302102543a80630935f6", "text": "We present a model of pragmatic referring expression interpretation in a grounded communication task (identifying colors from descriptions) that draws upon predictions from two recurrent neural network classifiers, a speaker and a listener, unified by a recursive pragmatic reasoning framework. Experiments show that this combined pragmatic model interprets color descriptions more accurately than the classifiers from which it is built, and that much of this improvement results from combining the speaker and listener perspectives. We observe that pragmatic reasoning helps primarily in the hardest cases: when the model must distinguish very similar colors, or when few utterances adequately express the target color. Our findings make use of a newly-collected corpus of human utterances in color reference games, which exhibit a variety of pragmatic behaviors. We also show that the embedded speaker model reproduces many of these pragmatic behaviors.", "title": "" }, { "docid": "27bf341c8c91713b5b9ebed84f78c92b", "text": "The Agile Manifesto and Agile Principles are typically referred to as the definitions of \"agile\" and \"agility\". There is research on agile values and agile practises, but how should “Scaled Agility” be defined, and what might be the characteristics and principles of Scaled Agile? This paper examines the characteristics of scaled agile, and the principles that are used to build up such agility. It also gives suggestions as principles upon which Scaled Agility can be built.", "title": "" }, { "docid": "9bd630b52ec72c2675cc2ce282c3c690", "text": "Several proteomic studies in the last decade revealed that many proteins are either completely disordered or possess long structurally flexible regions. Many such regions were shown to be of functional importance, often allowing a protein to interact with a large number of diverse partners. Parallel to these findings, during the last five years structural bioinformatics has produced an explosion of results regarding protein-protein interactions and their importance for cell signaling. We studied the occurrence of relatively short (10-70 residues), loosely structured protein regions within longer, largely disordered sequences that were characterized as bound to larger proteins. We call these regions molecular recognition features (MoRFs, also known as molecular recognition elements, MoREs). Interestingly, upon binding to their partner(s), MoRFs undergo disorder-to-order transitions. Thus, in our interpretation, MoRFs represent a class of disordered region that exhibits molecular recognition and binding functions. This work extends previous research showing the importance of flexibility and disorder for molecular recognition. We describe the development of a database of MoRFs derived from the RCSB Protein Data Bank and present preliminary results of bioinformatics analyses of these sequences. 
Based on the structure adopted upon binding, at least three basic types of MoRFs are found: alpha-MoRFs, beta-MoRFs, and iota-MoRFs, which form alpha-helices, beta-strands, and irregular secondary structure when bound, respectively. Our data suggest that functionally significant residual structure can exist in MoRF regions prior to the actual binding event. The contribution of intrinsic protein disorder to the nature and function of MoRFs has also been addressed. The results of this study will advance the understanding of protein-protein interactions and help towards the future development of useful protein-protein binding site predictors.", "title": "" }, { "docid": "5432e79349a798083f7b13369307ad80", "text": "Existing recommendation algorithms treat recommendation problem as rating prediction and the recommendation quality is measured by RMSE or other similar metrics. However, we argued that when it comes to E-commerce product recommendation, recommendation is more than rating prediction by realizing the fact price plays a critical role in recommendation result. In this work, we propose to build E-commerce product recommender systems based on fundamental economic notions. We first proposed an incentive compatible method that can effectively elicit consumer's willingness-to-pay in a typical E-commerce setting and in a further step, we formalize the recommendation problem as maximizing total surplus. We validated the proposed WTP elicitation algorithm through crowd sourcing and the results demonstrated that the proposed approach can achieve higher seller profit by personalizing promotion. We also proposed a total surplus maximization (TSM) based recommendation framework. We specified TSM by three of the most representative settings - e-commerce where the product quantity can be viewed as infinity, P2P lending where the resource is bounded and freelancer marketing where the resource (job) can be assigned to one freelancer. The experimental results of the corresponding datasets show that TSM exceeds the existing approach in terms of total surplus.", "title": "" }, { "docid": "5a397012744d958bb1a69b435c73e666", "text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuvers of humanoid robots.", "title": "" }, { "docid": "60f6d9303508494bcff9231266e490ad", "text": "In the January 2001 issue of Computer (pp. 135-137), we published the Software Defect Reduction Top 10 List—one of two foci pursued by the National Science Foundation-sponsored Center for Empirically Based Software Engineering (CeBASE). COTS-based systems (CBS) provide the other CeBASE focus. For our intent, COTS software has the following characteristics: The buyer has no access to the source code; the vendor controls its development; and it has a nontrivial installed base (that is, more than one customer; more than a few copies).
Criteria for making the list are that each empirical result has • significant current and future impact on software dependability, timeli-ness, and cost; • diagnostic value with respect to cost-effective best practices; and • reasonable generality across applications domains, market sectors, and product sizes. These are the same criteria we used for our defect-reduction list, but they are harder to evaluate for CBS because it is a less mature area. CBS's roller-coaster ride along Gartner Group's visibility-maturity curve (http:// gartner11.gartnerweb.com/public/static/ hotc/hc00094769.html) reveals its relative immaturity as it progresses through a peak of inflated expectations (with many overenthusiastic organizational mandates to switch to CBS), a trough of disillusion-ment, and to a slope of enlightenment, to a plateau of productivity. We present the CBS Top 10 List as hypotheses, rather than results, that also serve as software challenges for enhancing our empirical understanding of CBS. More than 99 percent of all executing computer instructions come from COTS products. Each instruction passed a market test for value. • Source. The more than 99 percent figure derives from analyzing Department of Defense data (B. Boehm, \" Managing Software Productivity and Reuse, \" Computer, Sept. 1999, pp. 111-113). • Implications. Economic necessity drives extensive COTS use. Nobody can afford to write a general-purpose operating system or database management system. Every project should consider the CBS option, but carefully weigh CBS benefits, costs, and risks against other options. \" Market test \" means that someone willingly pays to install the COTS component, not that every instruction is used or proves valuable. More than half the features in large COTS software products go unused. working alone used 12 to 16 percent of Microsoft Word and PowerPoint measurement features, whereas a 10-person group used 26 to 29 percent of these features. • Implications. Adding features is an economic necessity for vendors but it introduces complexity for COTS adopters. This added complexity can require …", "title": "" }, { "docid": "2a36a2ab5b0e01da90859179a60cef9a", "text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).", "title": "" }, { "docid": "5623ce7ffce8492d637d52975df3ac99", "text": "The online advertising industry is currently based on two dominant business models: the pay-per-impression model and the pay-per-click model. With the growth of sponsored search during the last few years, there has been a move toward the pay-per-click model as it decreases the risk to small advertisers. An alternative model, discussed but not widely used in the advertising industry, is pay-per-conversion, or more generally, pay-per-action. In this paper, we discuss mechanisms for the pay-per-action model and various challenges involved in designing such mechanisms.", "title": "" }, { "docid": "4aa8316315617aec4c076a7679482fa9", "text": "Continuous integration (CI) systems automate the compilation, building, and testing of software. 
Despite CI rising as a big success story in automated software engineering, it has received almost no attention from the research community. For example, how widely is CI used in practice, and what are some costs and benefits associated with CI? Without answering such questions, developers, tool builders, and researchers make decisions based on folklore instead of data. In this paper, we use three complementary methods to study the usage of CI in open-source projects. To understand which CI systems developers use, we analyzed 34,544 open-source projects from GitHub. To understand how developers use CI, we analyzed 1,529,291 builds from the most commonly used CI system. To understand why projects use or do not use CI, we surveyed 442 developers. With this data, we answered several key questions related to the usage, costs, and benefits of CI. Among our results, we show evidence that supports the claim that CI helps projects release more often, that CI is widely adopted by the most popular projects, as well as finding that the overall percentage of projects using CI continues to grow, making it important and timely to focus more research on CI.", "title": "" }, { "docid": "945883e31e893b3185a72277486e070a", "text": "Research in recommender systems is now starting to recognise the importance of multiple selection criteria to improve the recommendation output. In this paper, we present a novel approach to multi-criteria recommendation, based on the idea of clustering users in \"preference lattices\" (partial orders) according to their criteria preferences. We assume that some selection criteria for an item (product or a service) will dominate the overall ranking, and that these dominant criteria will be different for different users. Following this assumption, we cluster users based on their criteria preferences, creating a \"preference lattice\". The recommendation output for a user is then based on ratings by other users from the same or close clusters. Having introduced the general approach of clustering, we proceed to formulate three alternative recommendation methods instantiating the approach: (a) using the aggregation function of the criteria, (b) using the overall item ratings, and (c) combining clustering with collaborative filtering. We then evaluate the accuracy of the three methods using a set of experiments on a service ranking dataset, and compare them with a conventional collaborative filtering approach extended to cover multiple criteria. The results indicate that our third method, which combines clustering and extended collaborative filtering, produces the highest accuracy.", "title": "" }, { "docid": "3fe09244c12dc7ce92bdd0fd96380cec", "text": "A novel switching dc-to-dc converter is presented, which has the same general conversion property (increase or decrease of the input dc voltage) as does the conventional buck-boost converter, and which offers through its new optimum topology higher efficiency, lower output voltage ripple, reduced EMI, smaller size and weight, and excellent dynamics response. One of its most significant advantages is that both input and output current are not pulsating but are continuous (essentially dc with small superimposed switching current ripple), this resulting in a close approximation to the ideal physically nonrealizable dc-to-dc transformer. The converter retains the simplest possible structure with the minimum number of components which, when interconnected in its optimum topology, yield the maximum performance. 
The new converter is extensively experimentally verified, and both the steady state (dc) and the dynamic (ac) theoretical model are correlated well with the experimental data. Both theoretical and experimental comparisons with the conventional buck-boost converter, to which an input filter has been added, demonstrate the significant advantages of the new optimum topology switching dc-to-dc converter.", "title": "" }, { "docid": "1bd9467a7fafcdb579f8a4cd1d7be4b3", "text": "OBJECTIVE\nTo determine the diagnostic and triage accuracy of online symptom checkers (tools that use computer algorithms to help patients with self diagnosis or self triage).\n\n\nDESIGN\nAudit study.\n\n\nSETTING\nPublicly available, free symptom checkers.\n\n\nPARTICIPANTS\n23 symptom checkers that were in English and provided advice across a range of conditions. 45 standardized patient vignettes were compiled and equally divided into three categories of triage urgency: emergent care required (for example, pulmonary embolism), non-emergent care reasonable (for example, otitis media), and self care reasonable (for example, viral upper respiratory tract infection).\n\n\nMAIN OUTCOME MEASURES\nFor symptom checkers that provided a diagnosis, our main outcomes were whether the symptom checker listed the correct diagnosis first or within the first 20 potential diagnoses (n=770 standardized patient evaluations). For symptom checkers that provided a triage recommendation, our main outcomes were whether the symptom checker correctly recommended emergent care, non-emergent care, or self care (n=532 standardized patient evaluations).\n\n\nRESULTS\nThe 23 symptom checkers provided the correct diagnosis first in 34% (95% confidence interval 31% to 37%) of standardized patient evaluations, listed the correct diagnosis within the top 20 diagnoses given in 58% (55% to 62%) of standardized patient evaluations, and provided the appropriate triage advice in 57% (52% to 61%) of standardized patient evaluations. Triage performance varied by urgency of condition, with appropriate triage advice provided in 80% (95% confidence interval 75% to 86%) of emergent cases, 55% (47% to 63%) of non-emergent cases, and 33% (26% to 40%) of self care cases (P<0.001). Performance on appropriate triage advice across the 23 individual symptom checkers ranged from 33% (95% confidence interval 19% to 48%) to 78% (64% to 91%) of standardized patient evaluations.\n\n\nCONCLUSIONS\nSymptom checkers had deficits in both triage and diagnosis. Triage advice from symptom checkers is generally risk averse, encouraging users to seek care for conditions where self care is reasonable.", "title": "" }, { "docid": "3422237504daed461a86defc3bfbe8ca", "text": "As one of the fundamental features, color provides useful information and plays an important role for face recognition. Generally, the choice of a color space is different for different visual tasks. How can a color space be sought for the specific face recognition problem? To address this problem, we propose a sparse tensor discriminant color space (STDCS) model that represents a color image as a third-order tensor in this paper. The model can not only keep the underlying spatial structure of color images but also enhance robustness and give intuitionistic or semantic interpretation. STDCS transforms the eigenvalue problem to a series of regression problems. 
Then one sparse color space transformation matrix and two sparse discriminant projection matrices are obtained by applying lasso or elastic net on the regression problems. The experiments on three color face databases, AR, Georgia Tech, and Labeled Faces in the Wild face databases, show that both the performance and the robustness of the proposed method outperform those of the state-of-the-art TDCS model.", "title": "" } ]
scidocsrr
2b9957b4ceee06e34cebd2e8e21a1266
A long memory property of stock market returns and a new model *
[ { "docid": "b36f29d1d0f373a3aa209fc3185f5516", "text": "A natural generalization of the ARCH (Autoregressive Conditional Heteroskedastic) process introduced in Engle (1982) to allow for past conditional variances in the current conditional variance equation is proposed. Stationarity conditions and autocorrelation structure for this new class of parametric models are derived. Maximum likelihood estimation and testing are also considered. Finally an empirical example relating to the uncertainty of the inflation rate is presented.", "title": "" } ]
[ { "docid": "e4d38d8ef673438e9ab231126acfda99", "text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER", "title": "" }, { "docid": "47f5fa1668c9195fddfe3358f9e82ded", "text": "Actinomycosis is an infectious disease caused by a gram-positive anaerobic or microaerophilic Actinomyces species that causes both chronic suppurative and granulomatous inflammation. The following study reports a 48-year-old Iranian woman presenting with a spontaneous discharging sinus on the hard palate for 8months. The patient has no past medical history of note. Laboratory findings were unremarkable. The diagnosis was based on history and clinical evidence of the lesion confirmed by histopathological examination. The patient was treated with a regimen of oral ampicillin 500mg four times a day. She had a marked response to the treatment after 4weeks, and it was planned to continue the treatment for at least 6months with regular follow-up. To the best of the researchers' knowledge, this is the first report of actinomycotic sinus tract of the hard palate in Iran.", "title": "" }, { "docid": "e04cccfd59c056678e39fc4aed0eaa2b", "text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. 
Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.", "title": "" }, { "docid": "d0f1064f022f3a3c85a2a76f56f43dbb", "text": "Increasing amount of online music content has opened new opportunities for implementing new effective information access services – commonly known as music recommender systems – that support music navigation, discovery, sharing, and formation of user communities. In the recent years the new research area of contextual (or situational) music recommendation and retrieval has emerged. The basic idea is to retrieve and suggest music depending on the user’s actual situation, for instance emotional state, or any other contextual conditions that might influence the user’s perception of music. Despite the high potential of such idea, the development of real-world applications that retrieve or recommend music depending on the user’s context is still in its early stages. This survey illustrates various tools and techniques that can be used for addressing the research challenges posed by context-aware music retrieval and recommendation. This survey covers a broad range of topics, starting from classical music information retrieval (MIR) and recommender system (RS) techniques, and then focusing on context-aware music applications as well as the newer trends of affective and social computing applied to the music domain.", "title": "" }, { "docid": "57856c122a6f8a0db8423a1af9378b3e", "text": "Probiotics are defined as live microorganisms, which when administered in adequate amounts, confer a health benefit on the host. Health benefits have mainly been demonstrated for specific probiotic strains of the following genera: Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, Streptococcus, Pediococcus, Leuconostoc, Bacillus, Escherichia coli. The human microbiota is getting a lot of attention today and research has already demonstrated that alteration of this microbiota may have far-reaching consequences. One of the possible routes for correcting dysbiosis is by consuming probiotics. The credibility of specific health claims of probiotics and their safety must be established through science-based clinical studies. This overview summarizes the most commonly used probiotic microorganisms and their demonstrated health claims. As probiotic properties have been shown to be strain specific, accurate identification of particular strains is also very important. On the other hand, it is also demonstrated that the use of various probiotics for immunocompromised patients or patients with a leaky gut has also yielded infections, sepsis, fungemia, bacteraemia. 
Although the vast majority of probiotics that are used today are generally regarded as safe and beneficial for healthy individuals, caution in selecting and monitoring of probiotics for patients is needed and complete consideration of risk-benefit ratio before prescribing is recommended.", "title": "" }, { "docid": "522a9deb3926d067686d4c26354a78f7", "text": "The golden age of cannabis pharmacology began in the 1960s as Raphael Mechoulam and his colleagues in Israel isolated and synthesized cannabidiol, tetrahydrocannabinol, and other phytocannabinoids. Initially, THC garnered most research interest with sporadic attention to cannabidiol, which has only rekindled in the last 15 years through a demonstration of its remarkably versatile pharmacology and synergy with THC. Gradually a cognizance of the potential of other phytocannabinoids has developed. Contemporaneous assessment of cannabis pharmacology must be even far more inclusive. Medical and recreational consumers alike have long believed in unique attributes of certain cannabis chemovars despite their similarity in cannabinoid profiles. This has focused additional research on the pharmacological contributions of mono- and sesquiterpenoids to the effects of cannabis flower preparations. Investigation reveals these aromatic compounds to contribute modulatory and therapeutic roles in the cannabis entourage far beyond expectations considering their modest concentrations in the plant. Synergistic relationships of the terpenoids to cannabinoids will be highlighted and include many complementary roles to boost therapeutic efficacy in treatment of pain, psychiatric disorders, cancer, and numerous other areas. Additional parts of the cannabis plant provide a wide and distinct variety of other compounds of pharmacological interest, including the triterpenoid friedelin from the roots, canniprene from the fan leaves, cannabisin from seed coats, and cannflavin A from seed sprouts. This chapter will explore the unique attributes of these agents and demonstrate how cannabis may yet fulfil its potential as Mechoulam's professed \"pharmacological treasure trove.\"", "title": "" }, { "docid": "105ca5d61d2b89595f4145275b41e2c9", "text": "Face recognition (image processing) is a process of identification of human face or faces similar to human face in a video or an image. Sometimes it is also referred as the process of identifying images which are similar to each other, for example there is a database of 100 images of 10 individuals, each person can look up, down, sideways, can smile, can frown etc. Thus the designed system should be able to recognize a particular person having all the different expressions and also should be proficient in differentiating other person’s face. The face recognition technology has improved over the years but still there are some drawbacks. This paper studies the three main drawbacks in the present day image processing technology and suggests useful methods to surmount the drawbacks. First half of the paper talks in detail about the present day image processing technology and its drawbacks and the second half gives an experimental study and analysis of the techniques which can be used to improve the quality of image. 
General Terms: False Acceptance Rate; False Rejection Rate; Eigen faces; Linear Discriminate Analysis; Elastic Bunch Graph Matching using Fisherface Algorithm; the Hidden Markov; Dynamic Link Matching.", "title": "" }, { "docid": "1caf2d15e1f9c6fcacfcb46d8fdfc5b3", "text": "Content Delivery Networks (CDNs) [79, 97] have received considerable research attention in the recent past. A few studies have investigated CDNs to categorize and analyze them, and to explore the uniqueness, weaknesses, opportunities, and future directions in this field. Peng presents an overview of CDNs [75]. His work describes the critical issues involved in designing and implementing an effective CDN, and surveys the approaches proposed in literature to address these problems. Vakali et al. [95] present a survey of CDN architecture and popular CDN service providers. The survey is focused on understanding the CDN framework and its usefulness. They identify the characteristics and current practices in the content networking domain, and present an evolutionary pathway for CDNs, in order to exploit the current content networking trends. Dilley et al. [29] provide an insight into the overall system architecture of the leading CDN, Akamai [1]. They provide an overview of the existing content delivery approaches and describe Akamai’s network infrastructure and its operations in detail. They also point out the technical challenges that are to be faced while constructing a global CDN like Akamai. Saroiu et al. [84] examine content delivery from the point of view of four content delivery systems: Hypertext Transfer Protocol (HTTP) Web traffic, the Akamai CDN, Gnutella [8, 25], and KaZaa [62, 66] peer-to-peer file sharing systems. They also present significant implications for large organizations, service providers, network infrastructure providers, and general content delivery providers. Kung et al. [60] describe a taxonomy for content networks and introduce a new class of content networks that perform “semantic aggregation and content-sensitive placement” of content. They classify content networks based on their attributes in two dimensions: content aggregation and content placement. Sivasubramanian et al. [89] identify the issues", "title": "" }, { "docid": "9585a35e333231ed32871bcb6e7e1002", "text": "Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of datapoints, N , which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N . 
SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large dataset setting.", "title": "" }, { "docid": "4cfcbac8ec942252b79f2796fa7490f0", "text": "Over the next few years the amount of biometric data being at the disposal of various agencies and authentication service providers is expected to grow significantly. Such quantities of data require not only enormous amounts of storage but unprecedented processing power as well. To be able to face this future challenges more and more people are looking towards cloud computing, which can address these challenges quite effectively with its seemingly unlimited storage capacity, rapid data distribution and parallel processing capabilities. Since the available literature on how to implement cloud-based biometric services is extremely scarce, this paper capitalizes on the most important challenges encountered during the development work on biometric services, presents the most important standards and recommendations pertaining to biometric services in the cloud and ultimately, elaborates on the potential value of cloud-based biometric solutions by presenting a few existing (commercial) examples. In the final part of the paper, a case study on fingerprint recognition in the cloud and its integration into the e-learning environment Moodle is presented.", "title": "" }, { "docid": "e864bccfa711a5e773390524cd826808", "text": "Semantic similarity measures estimate the similarity between concepts, and play an important role in many text processing tasks. Approaches to semantic similarity in the biomedical domain can be roughly divided into knowledge based and distributional based methods. Knowledge based approaches utilize knowledge sources such as dictionaries, taxonomies, and semantic networks, and include path finding measures and intrinsic information content (IC) measures. Distributional measures utilize, in addition to a knowledge source, the distribution of concepts within a corpus to compute similarity; these include corpus IC and context vector methods. Prior evaluations of these measures in the biomedical domain showed that distributional measures outperform knowledge based path finding methods; but more recent studies suggested that intrinsic IC based measures exceed the accuracy of distributional approaches. Limitations of previous evaluations of similarity measures in the biomedical domain include their focus on the SNOMED CT ontology, and their reliance on small benchmarks not powered to detect significant differences between measure accuracy. There have been few evaluations of the relative performance of these measures on other biomedical knowledge sources such as the UMLS, and on larger, recently developed semantic similarity benchmarks. We evaluated knowledge based and corpus IC based semantic similarity measures derived from SNOMED CT, MeSH, and the UMLS on recently developed semantic similarity benchmarks. Semantic similarity measures based on the UMLS, which contains SNOMED CT and MeSH, significantly outperformed those based solely on SNOMED CT or MeSH across evaluations. Intrinsic IC based measures significantly outperformed path-based and distributional measures. We released all code required to reproduce our results and all tools developed as part of this study as open source, available under http://code.google.com/p/ytex . We provide a publicly-accessible web service to compute semantic similarity, available under http://informatics.med.yale.edu/ytex.web/ . 
Knowledge based semantic similarity measures are more practical to compute than distributional measures, as they do not require an external corpus. Furthermore, knowledge based measures significantly and meaningfully outperformed distributional measures on large semantic similarity benchmarks, suggesting that they are a practical alternative to distributional measures. Future evaluations of semantic similarity measures should utilize benchmarks powered to detect significant differences in measure accuracy.", "title": "" }, { "docid": "2cc3d181d9cb1201a3ae1a88e7be7954", "text": "We discuss the use of histogram of oriented gradients (HOG) descriptors as an effective tool for text description and recognition. Specifically, we propose a HOG-based texture descriptor (T-HOG) that uses a partition of the image into overlapping horizontal cells with gradual boundaries, to characterize single-line texts in outdoor scenes. The input of our algorithm is a rectangular image presumed to contain a single line of text in Roman-like characters. The output is a relatively short descriptor, that provides an effective input to an SVM classifier. Extensive experiments show that the T-HOG is more accurate than Dalal and Triggs’s original HOG-based classifier, for any descriptor size. In addition, we show that the T-HOG is an effective tool for text/non-text discrimination and can be used in various text detection applications. In particular, combining T-HOG with a permissive bottom-up text detector is shown to outperform state-of-the-art text detection systems in two major publicly available databases.", "title": "" }, { "docid": "76ef678b28d41317e2409b9fd2109f35", "text": "Conflicting guidelines for excisions about the alar base led us to develop calibrated alar base excision, a modification of Weir's approach. In approximately 20% of 1500 rhinoplasties this technique was utilized as a final step. Of these patients, 95% had lateral wall excess (“tall nostrils”), 2% had nostril floor excess (“wide nostrils”), 2% had a combination of these (“tall-wide nostrils”), and 1% had thick nostril rims. Lateral wall excess length is corrected by a truncated crescent excision of the lateral wall above the alar crease. Nasal floor excess is improved by an excision of the nasal sill. Combination noses (e.g., tall-wide) are approached with a combination alar base excision. Finally, noses with thick rims are improved with diamond excision. Closure of the excision is accomplished with fine simple external sutures. Electrocautery is unnecessary and deep sutures are utilized only in wide noses. Few complications were noted. Benefits of this approach include straightforward surgical guidelines, a natural-appearing correction, avoidance of notching or obvious scarring, and it is quick and simple.", "title": "" }, { "docid": "2b688f9ca05c2a79f896e3fee927cc0d", "text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. 
The proposed approach is also validated through experimental study with the UPQC hardware prototype.", "title": "" }, { "docid": "f83017ad2454c465d19f70f8ba995e95", "text": "The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.", "title": "" }, { "docid": "aca9aa6eb86c0aed1d707aee6f36cdce", "text": "This paper presents a set of geometric signature features for offline automatic signature verification based on the description of the signature envelope and the interior stroke distribution in polar and Cartesian coordinates. The features have been calculated using 16 bits fixed-point arithmetic and tested with different classifiers, such as hidden Markov models, support vector machines, and Euclidean distance classifier. The experiments have shown promising results in the task of discriminating random and simple forgeries.", "title": "" }, { "docid": "14b6ff85d404302af45cf608137879c7", "text": "In this paper, an automatic multi-organ segmentation based on multi-boost learning and statistical shape model search was proposed. First, simple but robust Multi-Boost Classifier was trained to hierarchically locate and pre-segment multiple organs. To ensure the generalization ability of the classifier relative location information between organs, organ and whole body is exploited. Left lung and right lung are first localized and pre-segmented, then liver and spleen are detected upon its location in whole body and its relative location to lungs, kidney is finally detected upon the features of relative location to liver and left lung. Second, shape and appearance models are constructed for model fitting. The final refinement delineation is performed by best point searching guided by appearance profile classifier and is constrained with multi-boost classified probabilities, intensity and gradient features. The method was tested on 30 unseen CT and 30 unseen enhanced CT (CTce) datasets from ISBI 2015 VISCERAL challenge. The results demonstrated that the multi-boost learning can be used to locate multi-organ robustly and segment lung and kidney accurately. The liver and spleen segmentation based on statistical shape searching has shown good performance too. Copyright c © by the paper’s authors. Copying permitted only for private and academic purposes. In: O. 
Goksel (ed.): Proceedings of the VISCERAL Anatomy Grand Challenge at the 2015 IEEE International Symposium on Biomedical Imaging (ISBI), New York, NY, Apr 16, 2015 published at http://ceur-ws.org", "title": "" }, { "docid": "b9844995ce04b0336f82b7a6cff5f307", "text": "In this paper, we present a novel method for impulse noise filter construction, based on the switching scheme with two cascaded detectors and two corresponding estimators. Genetic programming as a supervised learning algorithm is employed for building two detectors with complementary characteristics. The first detector identifies the majority of noisy pixels. The second detector searches for the remaining noise missed by the first detector, usually hidden in image details or with amplitudes close to its local neighborhood. Both detectors are based on the robust estimators of location and scale-median and MAD. The filter made by the proposed method is capable of effectively suppressing all kinds of impulse noise, in contrast to many existing filters which are specialized only for a particular noise model. In addition, we propose the usage of a new impulse noise model-the mixed impulse noise, which is more realistic and harder to treat than existing impulse noise models. The proposed model is the combination of commonly used noise models: salt-and-pepper and uniform impulse noise models. Simulation results show that the proposed two-stage GP filter produces excellent results and outperforms existing state-of-the-art filters.", "title": "" }, { "docid": "23d7eb4d414e4323c44121040c3b2295", "text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.", "title": "" }, { "docid": "f5713b2e233848cab82db0007099a39c", "text": "The term 'critical design' is on the upswing in HCI. We analyze how discourses around 'critical design' are diverging in Design and HCI. We argue that this divergence undermines HCI's ability to learn from and appropriate the design approaches signaled by this term. 
Instead, we articulate two ways to broaden and deepen connections between Design and HCI: (1) develop a broader collective understanding of what these design approaches can be, without forcing them to be about 'criticality' or 'critical design,' narrowly construed; and (2) shape a variation of design criticism to better meet Design practices, terms, and ways of knowing.", "title": "" } ]
scidocsrr
2eada9063abf389431e8d5f3e43e83bd
Risk assessment of supply chain finance with intuitionistic fuzzy information
[ { "docid": "d2454e1236b51349c06b67f8a807b319", "text": "This paper investigates capabilities of social media, such as Facebook, Twitter, Delicious, Digg and others, for their current and potential impact on the supply chain. In particular, this paper examines the use of social media to capture the impact on supply chain events and develop a context for those events. This paper also analyzes the use of social media in the supply chain to build relationships among supply chain participants. Further, this paper investigates the of use user supplied tags as a basis of evaluating and extending an ontology for supply chains. In addition, using knowledge discovery from social media, a number of concepts related to the supply chain are examined, including supply chain reputation and influence within the supply chain. Prediction markets are analyzed for their potential use in supply chains. Finally, this paper investigates the integration of traditional knowledge management along with knowledge generated from social media.", "title": "" } ]
[ { "docid": "cf7c5cd5f4caa6ded09f8b91d9f0ea16", "text": "Covariance matrix has recently received increasing attention in computer vision by leveraging Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has now been used as a generic representation in various recognition tasks. However, covariance matrix has shortcomings such as being prone to be singular, limited capability in modeling complicated feature relationship, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations shall be explored to achieve better recognition. It proposes an open framework to use the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework significantly elevates covariance representation to the unlimited opportunities provided by this new representation. Experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating the state-of-the-art performance over both the covariance and the existing non-covariance representations.", "title": "" }, { "docid": "e33080761e4ece057f455148c7329d5e", "text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.", "title": "" }, { "docid": "c7efe63ebb6608e5dda46a6e7485831d", "text": "Analyzing the neutrality of referees during 12 German premier league (1. Bundesliga) soccer seasons, this paper documents evidence that social forces influence agents’ decisions. Referees, who are appointed to be impartial, tend to favor the home team by systematically awarding more stoppage time in close matches in which the home team is behind. They also favor the home team in decisions to award goals and penalty kicks. Crowd composition affects the size and the direction of the bias, and the crowd’s proximity to the field is related to the quality of refereeing. (JEL J00)", "title": "" }, { "docid": "97e018568e7fda4b2c868a6e52147a55", "text": "Numerous accelerometers are being extensively used in the recognition of simple ambulatory activities. Using wearable sensors for activity recognition is the latest topic of interest in smart home research. We use an Actigraph watch with an embedded accelerometer sensor to recognize real-life activities done in a home. Real-life activities include the set of Activities of Daily Living (ADL). ADLs are the crucial activities we perform everyday in our homes. Actigraph watches have been profusely used in sleep studies to determine the sleep/wake cycles and also the quality of sleep. 
In this paper, we investigate the possibility of using Actigraph watches to recognize activities. The data collected from an Actigraph watch was analyzed to predict ADLs (Activities of Daily Living). We apply machine learning algorithms to the Actigraph data to predict the ADLs. Also, a comparative study of activity prediction accuracy obtained from four machine learning algorithms is discussed.", "title": "" }, { "docid": "58f230ff6030356707411083d37de333", "text": "This thesis explores the mark understanding problem in the context of a Tablet-PC-based classroom interaction system. It presents a novel method for interpreting digital ink strokes on background images, and aggregating those interpretations. It addresses complexity of mark interpreters and development and acquisition of a representation of a contextual background. It details the design, implementation, testing, and plans for future extension of a mark interpreter and aggregator in the Classroom Learning Partner, our classroom interaction system. Thesis Supervisor: Kimberle Koile, Ph.D. Title: Research Scientist", "title": "" }, { "docid": "f8984d660f39c66b3bd484ec766fa509", "text": "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information-security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these.", "title": "" }, { "docid": "eaeccd0d398e0985e293d680d2265528", "text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. 
Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.", "title": "" }, { "docid": "df6f6e52f97cfe2d7ff54d16ed9e2e54", "text": "Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.", "title": "" }, { "docid": "eac0b793ae1d38cffd274c1455311959", "text": "Leakage detection is a common chemical-sensing application. Leakage detection by thresholds on a single sensor signal suffers from important drawbacks when sensors show drift effects or when they are affected by other long-term cross sensitivities. In this paper, we present an adaptive method based on a recursive dynamic principal component analysis (RDPCA) algorithm that models the relationships between the sensors in the array and their past history. In normal conditions, a certain variance distribution characterizes sensor signals, however, in the presence of a new source of variance the PCA decomposition changes drastically. In order to prevent the influence of sensor drift, the model is adaptive, and it is calculated in a recursive manner with minimum computational effort. The behavior of this technique is studied with synthetic and real signals arising by oil vapor leakages in an air compressor. Results clearly demonstrate the efficiency of the proposed method", "title": "" }, { "docid": "d6f5b1a4c937fbfe87edb64df6931a6a", "text": "We created a socially assistive robotic learning companion to support English-speaking children’s acquisition of a new language (Spanish). In a two-month microgenetic study, 34 preschool children will play an interactive game with a fully autonomous robot and the robot’s virtual sidekick, a Toucan shown on a tablet screen. Two aspects of the interaction were personalized to each child: (1) the content of the game (i.e., which words were presented), and (2) the robot’s affective responses to the child’s emotional state and performance. 
We will evaluate whether personalization leads to greater engagement and learning.", "title": "" }, { "docid": "f86078de4b011a737b6bdedd86b4e82f", "text": "Alarm fatigue can adversely affect nurses’ efficiency and concentration on their tasks, which is a threat to patients’ safety. The purpose of the present study was to develop and test the psychometric accuracy of an alarm fatigue questionnaire for nurses. This study was conducted in two stages: in stage one, in order to establish the different aspects of the concept of alarm fatigue, the researchers reviewed the available literature—articles and books—on alarm fatigue, and then consulted several experts in a meeting to define alarm fatigue and develop statements for the questionnaire. In stage two, after the final draft had been approved, the validity of the instrument was measured using the two methods of face validity (the quantitative and qualitative approaches) and content validity (the qualitative and quantitative approaches). Test–retest, Cronbach’s alpha, and Principal Component Analysis were used for item reduction and reliability analysis. Based on the results of stage one, the researchers extracted 30 statements based on a 5-point Likert scale. In stage two, after the face and content validity of the questionnaire had been established, 19 statements were left in the instrument. Based on factor loadings of the items and “alpha if item deleted” and after the second round of consultation with the expert panel, six items were removed from the scale. The test of the reliability of nurses’ alarm fatigue questionnaire based on the internal homogeneity and retest methods yielded the following results: test–retest correlation coefficient = 0.99; Guttman split-half correlation coefficient = 0.79; Cronbach’s alpha = 0.91. Regarding the importance of recognizing alarm fatigue in nurses, there is need for an instrument to measure the phenomenon. The results of the study show that the developed questionnaire is valid and reliable enough for measuring alarm fatigue in nurses.", "title": "" }, { "docid": "c03a0bd78edcb7ebde0321ca7479853d", "text": "The evolution of speech can be studied independently of the evolution of language, with the advantage that most aspects of speech acoustics, physiology and neural control are shared with animals, and thus open to empirical investigation. At least two changes were necessary prerequisites for modern human speech abilities: (1) modification of vocal tract morphology, and (2) development of vocal imitative ability. Despite an extensive literature, attempts to pinpoint the timing of these changes using fossil data have proven inconclusive. However, recent comparative data from nonhuman primates have shed light on the ancestral use of formants (a crucial cue in human speech) to identify individuals and gauge body size. Second, comparative analysis of the diverse vertebrates that have evolved vocal imitation (humans, cetaceans, seals and birds) provides several distinct, testable hypotheses about the adaptive function of vocal mimicry. These developments suggest that, for understanding the evolution of speech, comparative analysis of living species provides a viable alternative to fossil data. However, the neural basis for vocal mimicry and for mimesis in general remains unknown.", "title": "" }, { "docid": "299d59735ea1170228aff531645b5d4a", "text": "While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. 
In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and “black hats”. We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears “new” is so only relative to “traditional” computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.", "title": "" }, { "docid": "1e511c078892f54f51e93f6f4dbfa31f", "text": "Over the past decade, the use of mobile phones has increased significantly. However, with every technological development comes some element of health concern, and cell phones are no exception. Recently, various studies have highlighted the negative effects of cell phone exposure on human health, and concerns about possible hazards related to cell phone exposure have been growing. This is a comprehensive, up-to-the-minute overview of the effects of cell phone exposure on human health. The types of cell phones and cell phone technologies currently used in the world are discussed in an attempt to improve the understanding of the technical aspects, including the effect of cell phone exposure on the cardiovascular system, sleep and cognitive function, as well as localized and general adverse effects, genotoxicity potential, neurohormonal secretion and tumour induction. The proposed mechanisms by which cell phones adversely affect various aspects of human health, and male fertility in particular, are explained, and the emerging molecular techniques and approaches for elucidating the effects of mobile phone radiation on cellular physiology using high-throughput screening techniques, such as metabolomics and microarrays, are discussed. A novel study is described, which is looking at changes in semen parameters, oxidative stress markers and sperm DNA damage in semen samples exposed in vitro to cell phone radiation.", "title": "" }, { "docid": "5f70d96454e4a6b8d2ce63bc73c0765f", "text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. 
uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.", "title": "" }, { "docid": "7ea2bb46f00847c1b3800c187540d75b", "text": "The treatment of depression has predominantly focused on medication or cognitive behavioral therapy and has given little attention to the effect of body movement and postures. This study investigated how body posture during movement affects subjective energy level. One hundred and ten university students (average age 23.7) rated their energy level and then walked in either a slouched position or in a pattern of opposite arm and leg skipping. After about two to three minutes, the students rated their subjective energy level, then walked in the opposite movement pattern and rated themselves again. After slouched walking, the participants experienced a decrease in their subjective energy (p < .01); after opposite arm leg skipping they experienced a significant increase in their subjective energy (p < .01). There was a significantly greater decrease (p < .05) in energy at the end of the slouched walk for the 20% of the participants who had the highest self-rated depression scores, as compared to the lowest 20%. By changing posture, subjective energy level can be decreased or increased. Thus the mind-body relationship is a two way street: mind to body and body to mind. The authors discuss clinical and teaching implications of body posture.", "title": "" }, { "docid": "c39ab37765fbafdbc2dd3bf70c801d27", "text": "This paper presents the advantages in extending Classical Tensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra has been called Generalized Tensor Algebra (GTA). Stochastic Automata Networks (SAN) and Superposed Generalized Stochastic Petri Nets (SGSPN) formalisms use such Kronecker representations. We show that SAN, which uses GTA, has the same application scope of SGSPN, which uses CTA. We also show that any SAN model with functions has at least one equivalent representation without functions. In fact, the use of functions, and consequently the GTA, is not really a “need” since there is an equivalence of formalisms, but in some cases it represents, in a computational cost point of view, some irrefutable “advantages”. Some modeling examples are presented in order to draw comparisons between the memory needs and CPU time to the generation, and the solution of the presented models.", "title": "" }, { "docid": "7004293690fe2fcc2e8880d08de83e7c", "text": "Hidradenitis suppurativa (HS) is a challenging skin disease with limited therapeutic options. Obesity and metabolic syndrome are being increasingly implicated and associated with younger ages and greater metabolic severity. A 19-year-old female with an 8-year history of progressively debilitating cicatricial HS disease presented with obesity, profound anemia, leukocytosis, increased platelet count, hypoalbuminemia, and elevated liver enzymes. 
A combination of metformin, liraglutide, levonorgestrel-ethinyl estradiol, dapsone, and finasteride was initiated. Acute antibiotic use for recurrences and flares could be slowly discontinued. Over the course of 3 years on this regimen, the liver enzymes normalized in 1 year, followed in2 years by complete resolution of the majority of the hematological and metabolic abnormalities. The sedimentation rate reduced from over 120 to 34 mm/h. She required 1 surgical intervention for perianal disease after 9 months on the regimen. Flares greatly diminished in intensity and duration, with none in the past 6 months. Right axillary lesions have completely healed with residual disease greatly reduced. Chiefly abdominal lesions are persistent. She was able to complete high school from home, start a job, and resume a normal life. Initial weight loss of 40 pounds was not maintained. The current regimen is being well tolerated and continued.", "title": "" }, { "docid": "91c7a22694ec8ae4d8ca5ad3147fb11e", "text": "The binary-weight CNN is one of the most efficient solutions for mobile CNNs. However, a large number of operations are required to process each image. To reduce such a huge operation count, we propose an energy-efficient kernel decomposition architecture, based on the observation that a large number of operations are redundant. In this scheme, all kernels are decomposed into sub-kernels to expose the common parts. By skipping the redundant computations, the operation count for each image was consequently reduced by 47.7%. Furthermore, a low cost bit-width quantization technique was implemented by exploiting the relative scales of the feature data. Experimental results showed that the proposed architecture achieves a 22% energy reduction.", "title": "" }, { "docid": "2f7ba7501fcf379b643867c7d5a9d7bf", "text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.", "title": "" } ]
scidocsrr
a0d7fa366db7c0e86edff6405f3ce713
Green initiatives in IoT
[ { "docid": "ecf0538ad1528f465e6f582c65b18bb8", "text": "The Internet of Things (IoT) is a dynamic global information network consisting of Internet-connected objects, such as radio frequency identifications, sensors, and actuators, as well as other instruments and smart appliances that are becoming an integral component of the Internet. Over the last few years, we have seen a plethora of IoT solutions making their way into the industry marketplace. Context-aware communications and computing have played a critical role throughout the last few years of ubiquitous computing and are expected to play a significant role in the IoT paradigm as well. In this paper, we examine a variety of popular and innovative IoT solutions in terms of context-aware technology perspectives. More importantly, we evaluate these IoT solutions using a framework that we built around well-known context-aware computing theories. This survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm. It also provides a systematic exploration of existing IoT products in the marketplace and highlights a number of potentially significant research directions and trends.", "title": "" } ]
[ { "docid": "ddc7052b6931604379d4fdeda706f2f0", "text": "Advancements in convolutional neural networks (CNNs) have made significant strides toward achieving high performance levels on multiple object recognition tasks. While some approaches utilize information from the entire scene to propose regions of interest, the task of interpreting a particular region or object is still performed independently of other objects and features in the image. Here we demonstrate that a scene's ‘gist’ can significantly contribute to how well humans can recognize objects. These findings are consistent with the notion that humans foveate on an object and incorporate information from the periphery to aid in recognition. We use a biologically inspired two-part convolutional neural network ('GistNet') that models the fovea and periphery to provide a proof-of-principle demonstration that computational object recognition can significantly benefit from the gist of the scene as contextual information. Our model yields accuracy improvements of up to 50% in certain object categories when incorporating contextual gist, while only increasing the original model size by 5%. This proposed model mirrors our intuition about how the human visual system recognizes objects, suggesting specific biologically plausible constraints to improve machine vision and building initial steps towards the challenge of scene understanding.", "title": "" }, { "docid": "d3fa0c7d502ab16c9cd2acc74c7cb9b0", "text": "Driven by the trends of BigData and Cloud computing, there is a growing demand for processing and analyzing data that are generated and stored across geo-distributed data centers. However, due to the limited network bandwidth between data centers and the growing data volume spread across different locations, it has become increasingly inefficient to aggregate data and to perform computations at a single data center. An approach that has been commonly used by data-intensive cluster computation systems, like Hadoop, is to distribute computations based on data locality so that data can be processed locally to reduce the network overhead and improve performance. But limited work has been done to adapt and evaluate such technique for geo-distributed data centers. In this paper, we proposed DRASH (Data-Replication Aware Scheduler), a job scheduling algorithm that enforces data locality to prevent data transfer, and exploits data replications to improve overall system performance. Our evaluation using simulations with realistic workload traces shows that DRASH can outperform other existing approaches by 16% to 60% in average job completion time, and achieve greater improvements under higher data replication factors.", "title": "" }, { "docid": "d916bb0a6029f31ebac027a1ac08d5d2", "text": "Probabilistic Roadmap (PRM) is one of the methods for finding the shortest path between Beginning and destination way-points on maritime shipping routes. The main behavior of the algorithm is based on assigning randomly distributed nodes on search space, finding the alternative routes, and then selecting the shortest one among them. As its name indicates, these path candidates are determined according to the position of the randomly distributed nodes at the beginning of the PRM algorithm. Therefore, it is not possible to obtain the global (or near global) shortest path. 
Hence, in this paper, two issues are considered i) omit the randomness with the previously proposed methodologies and ii) reduce the total track mileage with the aid of Hooke-Jeeves algorithm which is one of the classical optimization algorithm. To present the performance of this proposed framework, 20 different scenarios are defined. The PRM-HJ couple and PRM are applied to all these test problems and results are compared with respect to the total track mileage.", "title": "" }, { "docid": "7d0dfce24bd539cb790c0c25348d075d", "text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many realworld applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithms. Specifically, by treating the unlabelled data as noisy negative examples, we could automatically label a group positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier with a consistency guarantee. The relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. The proposed algorithm is model-free and thus do not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets. ∗UBTECH Sydney Artificial Intelligence Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, The University of Sydney, Darlington, NSW 2008, Australia, fehe7727@uni.sydney.edu.au; tongliang.liu@sydney.edu.au; dacheng.tao@sydney.edu.au. †Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia, geoff.webb@monash.edu. arXiv:1808.02180v1 [cs.LG] 7 Aug 2018", "title": "" }, { "docid": "7bc8be5766eeb11b15ea0aa1d91f4969", "text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.", "title": "" }, { "docid": "9ee98f4c2e1fe8b5f49fd0e8a3b142c5", "text": "In this paper we characterize the workload of a Netflix streaming video web server. Netflix is a widely popular subscription service with over 81 million global subscribers [24]. The service streams professionally produced TV shows and movies over the Internet to an extremely diverse and representative set of playback devices over broadband, DSL, WiFi and cellular connections. 
Characterizing this type of workload is an important step to understanding and optimizing the performance of the servers used to support the growing number of streaming video services. We focus on the HTTP requests observed at the server from Netflix client devices by analyzing anonymized log files obtained from a server containing a portion of the Netflix catalog. We introduce the notion of chains of sequential requests to represent the spatial locality of the workload and find that despite servicing clients that adapt to changes in network and server conditions, and despite the fact that the majority of chains are short (60% are no longer than 1 MB), the vast majority of the bytes requested are sequential. We also observe that during a viewing session, client devices behave in recognizable patterns. We characterize sessions using transient, stable and inactive phases. We find that playback sessions are surprisingly stable; across all sessions 5% of the total session time is spent in transient phases, 79% in stable phases and 16% in inactive phases, and the average duration of a stable phase is 8.5 minutes. Finally we analyze the chains to evaluate different prefetch algorithms and show that by exploiting knowledge about workload characteristics, the workload can be serviced with 13% lower hard drive utilization or 30% less system memory compared to a prefetch algorithm that makes no use of workload characteristics.", "title": "" }, { "docid": "86d725fa86098d90e5e252c6f0aaab3c", "text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.", "title": "" }, { "docid": "72ca03796ee22e19b49ccbaea2abd3df", "text": "BACKGROUND\nVortioxetine is approved for the treatment of major depressive disorder and differs from other antidepressants in terms of its pharmacodynamic profile. Given the limited number of head-to-head studies comparing vortioxetine with other antidepressants, indirect comparisons using effect sizes observed in other trials can be helpful to discern potential differences in clinical outcomes.\n\n\nMETHODS\nData sources were the clinical trial reports for the pivotal short-term double-blind trials for vortioxetine and from publicly available sources for the pivotal short-term double-blind trials for two commonly used generic serotonin specific reuptake inhibitor antidepressants (sertraline, escitalopram), two commonly used generic serotonin-norepinephrine reuptake inhibitor antidepressants (venlafaxine, duloxetine), and two recently introduced branded antidepressants (vilazodone, levomilnacipran). Response was the efficacy outcome of interest, defined as a≥50% reduction from baseline on the Montgomery-Asberg Depression Rating Scale or Hamilton Depression Rating Scale. The tolerability outcome of interest was discontinuation because of an adverse event. 
Number needed to treat (NNT) and number needed to harm (NNH) for these outcomes vs. placebo were calculated, as well as likelihood to be helped or harmed (LHH) to contrast efficacy vs. tolerability.\n\n\nRESULTS\nThe analysis included 8 studies for duloxetine, 3 studies for escitalopram, 5 studies for levomilnacipran, 1 study for sertraline, 4 studies for venlafaxine, 2 studies for vilazodone, and 11 studies for vortioxetine. NNTs for response vs. placebo were 6 (95% CI 5-8), 7 (5-11), 10 (8-16), 6 (4-13), 6 (5-9), 8 (6-16), and 9 (7-11), respectively. NNHs for discontinuation because of an adverse event vs. placebo were 25 (17-51), 31 (19-92), 19 (14-27), 7 (5-12), 8 (7-11), 27 (15-104), and 43 (28-91), respectively. LHH values contrasting response vs. discontinuation because of an adverse event were 4.3, 4.6, 1.8, 1.2, 1.4, 3.3, and 5.1 respectively.\n\n\nLIMITATIONS\nSubjects were all participants in carefully designed and executed clinical trials and may not necessarily reflect patients in clinical settings who may have complex psychiatric and non-psychiatric comorbidities. The measured outcomes come from different studies and thus comparisons are indirect.\n\n\nCONCLUSIONS\nVortioxetine demonstrates similar efficacy to that observed for duloxetine, escitalopram, levomilnacipran, sertraline, venlafaxine, and vilazodone, however overall tolerability as measured by discontinuation because of an adverse event differs. Vortioxetine is 5.1 times more likely to be associated with response than discontinuation because of an adverse event when compared to placebo.", "title": "" }, { "docid": "169ed8d452a7d0dd9ecf90b9d0e4a828", "text": "Technology is common in the domain of knowledge distribution, but it rarely enhances the process of knowledge use. Distribution delivers knowledge to the potential user's desktop but cannot dictate what he or she does with it thereafter. It would be interesting to envision technologies that help to manage personal knowledge as it applies to decisions and actions. The viewpoints about knowledge vary from individual, community, society, personnel development or national development. Personal Knowledge Management (PKM) integrates Personal Information Management (PIM), focused on individual skills, with Knowledge Management (KM). KM Software is a subset of Enterprise content management software and which contains a range of software that specialises in the way information is collected, stored and/or accessed. This article focuses on KM skills, PKM and PIM Open Sources Software, Social Personal Management and also highlights the Comparison of knowledge base management software and its use.", "title": "" }, { "docid": "a8e9c127e56302596610e6719554de98", "text": "Online media offers opportunities to marketers to deliver brand messages to a large audience. Advertising technology platforms enables the advertisers to find the proper group of audiences and deliver ad impressions to them in real time. The recent growth of the real time bidding has posed a significant challenge on monitoring such a complicated system. With so many components we need a reliable system that detects the possible changes in the system and alerts the engineering team. In this paper we describe the mechanism that we invented for recovering the representative metrics and detecting the change in their behavior. 
We show that this mechanism is able to detect the possible problems in time by describing some incident cases.", "title": "" }, { "docid": "c3f942a915c149a7fc9929e0404c61f2", "text": "Distributed model training suffers from communication overheads due to frequent gradient updates transmitted between compute nodes. To mitigate these overheads, several studies propose the use of sparsified stochastic gradients. We argue that these are facets of a general sparsification method that can operate on any possible atomic decomposition. Notable examples include elementwise, singular value, and Fourier decompositions. We present Atomo, a general framework for atomic sparsification of stochastic gradients. Given a gradient, an atomic decomposition, and a sparsity budget, Atomo gives a random unbiased sparsification of the atoms minimizing variance. We show that recent methods such as QSGD and TernGrad are special cases of Atomo and that sparsifiying the singular value decomposition of neural networks gradients, rather than their coordinates, can lead to significantly faster distributed training.", "title": "" }, { "docid": "5f1c03e25fa9a83f6e3a66843778e066", "text": "We consider the problem of learning preferences over trajectories for mobile manipulators such as personal robots and assembly line robots. The preferences we learn are more intricate than those arising from simple geometric constraints on robot’s trajectory, such as distance of the robot from human etc. Our preferences are rather governed by the surrounding context of various objects and human interactions in the environment. Such preferences makes the problem challenging because the criterion of defining a good trajectory now varies with the task, with the environment and across the users. Furthermore, demonstrating optimal trajectories (e.g., learning from expert’s demonstrations) is often challenging and non-intuitive on high degrees of freedom manipulators. In this work, we propose an approach that requires a non-expert user to only incrementally improve the trajectory currently proposed by the robot. We implement our algorithm on two high degree-of-freedom robots, PR2 and Baxter, and present three intuitive mechanisms for providing such incremental feedback. In our experimental evaluation we consider two context rich settings – household chores and grocery store checkout – and show that users are able to train the robot with just a few feedbacks (taking only a few minutes). Despite receiving sub-optimal feedback from non-expert users, our algorithm enjoys theoretical bounds on regret that match the asymptotic rates of optimal trajectory algorithms.", "title": "" }, { "docid": "40be421f4d66283357c22fa9cd59790f", "text": "We have examined standards required for successful e-commerce (EC) architectures and evaluated the strengths and limitations of current systems that have been developed to support EC. We find that there is an unfilled need for systems that can reliably locate buyers and sellers in electronic marketplaces and also facilitate automated transactions. The notion of a ubiquitous network where loosely coupled buyers and sellers can reliably find each other in real time, evaluate products, negotiate prices, and conduct transactions is not adequately supported by current systems. These findings were based on an analysis of mainline EC architectures: EDI, company Websites, B2B hubs, e-Procurement systems, and Web Services. Limitations of each architecture were identified. 
Particular attention was given to the strengths and weaknesses of the Web Services architecture, since it may overcome some limitations of the other approaches. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1a7bdb641bc9b52a1e48e2d6842bf5aa", "text": "Sales of a brand are determined by measures such as how many customers buy the brand, how often, and how much they also buy other brands. Scanner panel operators routinely report these ‘‘brand performance measures’’ (BPMs) to their clients. In this position paper, we consider how to understand, interpret, and use these measures. The measures are shown to follow well-established patterns. One is that big and small brands differ greatly in how many buyers they have, but usually far less in how loyal these buyers are. The Dirichlet model predicts these patterns. It also provides a broader framework for thinking about all competitive repeat-purchase markets—from soup to gasoline, prescription drugs to aviation fuel, where there are large and small brands, and light and heavy buyers, in contexts as diverse as the United States, United Kingdom, Japan, Germany, and Australasia. Numerous practical uses of the framework are illustrated: auditing the performance of established brands, predicting and evaluating the performance of new brands, checking the nature of unfamiliar markets, of partitioned markets, and of dynamic market situations more generally (where the Dirichlet provides theoretical benchmarks for price promotions, advertising, etc.). In addition, many implications for our understanding of consumers, brands, and the marketing mix logically follow from the Dirichlet framework. In repeat-purchase markets, there is often a lack of segmentation between brands and the typical consumer exhibits polygamous buying behavior (though there might be strong segmentation at the category level). An understanding of these applications and implications leads to consumer insights, imposes constraints on marketing action, and provides norms for evaluating brands and for assessing marketing initiatives. D 2003 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9bbee4d4c1040b5afd92910ce23d5ba5", "text": "BACKGROUND\nNovel interventions for treatment-resistant depression (TRD) in adolescents are urgently needed. Ketamine has been studied in adults with TRD, but little information is available for adolescents. This study investigated efficacy and tolerability of intravenous ketamine in adolescents with TRD, and explored clinical response predictors.\n\n\nMETHODS\nAdolescents, 12-18 years of age, with TRD (failure to respond to two previous antidepressant trials) were administered six ketamine (0.5 mg/kg) infusions over 2 weeks. Clinical response was defined as a 50% decrease in Children's Depression Rating Scale-Revised (CDRS-R); remission was CDRS-R score ≤28. Tolerability assessment included monitoring vital signs and dissociative symptoms using the Clinician-Administered Dissociative States Scale (CADSS).\n\n\nRESULTS\nThirteen participants (mean age 16.9 years, range 14.5-18.8 years, eight biologically male) completed the protocol. Average decrease in CDRS-R was 42.5% (p = 0.0004). Five (38%) adolescents met criteria for clinical response. Three responders showed sustained remission at 6-week follow-up; relapse occurred within 2 weeks for the other two responders. Ketamine infusions were generally well tolerated; dissociative symptoms and hemodynamic symptoms were transient. 
Higher dose was a significant predictor of treatment response.\n\n\nCONCLUSIONS\nThese results demonstrate the potential role for ketamine in treating adolescents with TRD. Limitations include the open-label design and small sample; future research addressing these issues are needed to confirm these results. Additionally, evidence suggested a dose-response relationship; future studies are needed to optimize dose. Finally, questions remain regarding the long-term safety of ketamine as a depression treatment; more information is needed before broader clinical use.", "title": "" }, { "docid": "b475a47a9c8e8aca82c236250bbbfc33", "text": "OBJECTIVE\nTo issue a recommendation on the types and amounts of physical activity needed to improve and maintain health in older adults.\n\n\nPARTICIPANTS\nA panel of scientists with expertise in public health, behavioral science, epidemiology, exercise science, medicine, and gerontology.\n\n\nEVIDENCE\nThe expert panel reviewed existing consensus statements and relevant evidence from primary research articles and reviews of the literature.\n\n\nPROCESS\nAfter drafting a recommendation for the older adult population and reviewing drafts of the Updated Recommendation from the American College of Sports Medicine (ACSM) and the American Heart Association (AHA) for Adults, the panel issued a final recommendation on physical activity for older adults.\n\n\nSUMMARY\nThe recommendation for older adults is similar to the updated ACSM/AHA recommendation for adults, but has several important differences including: the recommended intensity of aerobic activity takes into account the older adult's aerobic fitness; activities that maintain or increase flexibility are recommended; and balance exercises are recommended for older adults at risk of falls. In addition, older adults should have an activity plan for achieving recommended physical activity that integrates preventive and therapeutic recommendations. The promotion of physical activity in older adults should emphasize moderate-intensity aerobic activity, muscle-strengthening activity, reducing sedentary behavior, and risk management.", "title": "" }, { "docid": "94e5d19f134670a6ae982311e6c1ccc1", "text": "In mobile ad hoc networks, it is usually assumed that all the nodes belong to the same authority; therefore, they are expected to cooperate in order to support the basic functions of the network such as routing. In this paper, we consider the case in which each node is its own authority and tries to maximize the bene ts it gets from the network. In order to stimulate cooperation, we introduce a virtual currency and detail the way it can be protected against theft and forgery. We show that this mechanism ful lls our expectations without signi cantly decreasing the performance of the network.", "title": "" }, { "docid": "102ed611c0c32cfae33536706d5b3fbf", "text": "In this paper we model user behaviour in Twitter to capture the emergence of trending topics. For this purpose, we first extensively analyse tweet datasets of several different events. In particular, for these datasets, we construct and investigate the retweet graphs. We find that the retweet graph for a trending topic has a relatively dense largest connected component (LCC). Next, based on the insights obtained from the analyses of the datasets, we design a mathematical model that describes the evolution of a retweet graph by three main parameters. 
We then quantify, analytically and by simulation, the influence of the model parameters on the basic characteristics of the retweet graph, such as the density of edges and the size and density of the LCC. Finally, we put the model in practice, estimate its parameters and compare the resulting behavior of the model to our datasets.", "title": "" } ]
scidocsrr
424a0bbb6e02a2162b0ea504e9d7f2f9
A Survey on Data-driven Dictionary-based Methods for 3D Modeling
[ { "docid": "1353157ed70460e7ddf2202a3f1125f9", "text": "Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models which belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system which utilizes this common component structure allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of a rich geometric content.", "title": "" }, { "docid": "bdfb3a761d7d9dbb96fa4f07bc2c1f89", "text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.", "title": "" } ]
[ { "docid": "deccfbca102068be749a231405aca30e", "text": " Case report.. We present a case of 28-year-old female patient with condylomata gigantea (Buschke-Lowenstein tumor) in anal and perianal region with propagation on vulva and vagina. The local surgical excision and CO2 laser treatment were performed. Histological examination showed presence of HPV type 11 without malignant potential. Result.. Three months later, there was no recurrence.", "title": "" }, { "docid": "a35aa35c57698d2518e3485ec7649c66", "text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf", "title": "" }, { "docid": "07f1caa5f4c0550e3223e587239c0a14", "text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. The first is improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm can bring enhancement for the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.", "title": "" }, { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. 
These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "aa223de93696eec79feb627f899f8e8d", "text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.", "title": "" }, { "docid": "abea38a143932cc7372fa19f0c494908", "text": "Applications of reinforcement learning for robotic manipulation often assume an episodic setting. However, controllers trained with reinforcement learning are often situated in the context of a more complex compound task, where multiple controllers might be invoked in sequence to accomplish a higher-level goal. Furthermore, training such controllers typically requires resetting the environment between episodes, which is typically handled manually. We describe an approach for training chains of controllers with reinforcement learning. This requires taking into account the state distributions induced by preceding controllers in the chain, as well as automatically training reset controllers that can reset the task between episodes. The initial state of each controller is determined by the controller that precedes it, resulting in a non-stationary learning problem. We demonstrate that a recently developed method that optimizes linear-Gaussian controllers under learned local linear models can tackle this sort of non-stationary problem, and that training controllers concurrently with a corresponding reset controller only minimally increases training time. We also demonstrate this method on a complex tool use task that consists of seven stages and requires using a toy wrench to screw in a bolt. This compound task requires grasping and handling complex contact dynamics. After training, the controllers can execute the entire task quickly and efficiently. Finally, we show that this method can be combined with guided policy search to automatically train nonlinear neural network controllers for a grasping task with considerable variation in target position.", "title": "" }, { "docid": "3c54b07b159fabe4c3ca1813abfdae6f", "text": "We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. 
Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the ‘six degrees of separation’ phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which ‘your friends have more friends than you’. Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.", "title": "" }, { "docid": "2f761de3f94d86a2c73aac3dce413dca", "text": "The class imbalance problem has been recognized in many practical domains and a hot topic of machine learning in recent years. In such a problem, almost all the examples are labeled as one class, while far fewer examples are labeled as the other class, usually the more important class. In this case, standard machine learning algorithms tend to be overwhelmed by the majority class and ignore the minority class since traditional classifiers seeking an accurate performance over a full range of instances. This paper reviewed academic activities special for the class imbalance problem firstly. Then investigated various remedies in four different levels according to learning phases. Following surveying evaluation metrics and some other related factors, this paper showed some future directions at last.", "title": "" }, { "docid": "3ba2477beb6a42bfe2e0c45d9b48b471", "text": "The presence and functional role of inositol trisphosphate receptors (IP3R) was investigated by electrophysiology and immunohistochemistry in hair cells from the frog semicircular canal. Intracellular recordings were performed from single fibres of the posterior canal in the isolated, intact frog labyrinth, at rest and during rotation, in the presence of IP3 receptor inhibitors and drugs known to produce Ca2+ release from the internal stores or to increase IP3 production. Hair cell immunolabelling for IP3 receptor was performed by standard procedures. The drug 2-aminoethoxydiphenyl borate (2APB), an IP3 receptor inhibitor, produced a marked decrease of mEPSP and spike frequency at low concentration (0.1 mm), without affecting mEPSP size or time course. At high concentration (1 mm), 2APB is reported to block the sarcoplasmic-endoplasmic reticulum Ca2+-ATPase (SERCA pump) and increase [Ca2+]i; at the labyrinthine cytoneural junction, it greatly enhanced the resting and mechanically evoked sensory discharge frequency. The selective agonist of group I metabotropic glutamate receptors (RS)-3,5-dihydroxyphenylglycine (DHPG, 0.6 mm), produced a transient increase in resting mEPSP and spike frequency at the cytoneural junction, with no effects on mEPSP shape or amplitude. 
Pretreatment with cyclopiazonic acid (CPA, 0.1 mm), a SERCA pump inhibitor, prevented the facilitatory effect of both 2APB and DHPG, suggesting a link between Ca2+ release from intracellular stores and quantal emission. Consistently, diffuse immunoreactivity for IP3 receptors was observed in posterior canal hair cells. Our results indicate the presence and a possibly relevant functional role of IP3-sensitive stores in controlling [Ca2+]i and modulating the vestibular discharge.", "title": "" }, { "docid": "670556463e3204a98b1e407ea0619a1f", "text": "1 Ekaterina Prasolova-Forland, IDI, NTNU, Sem Salandsv 7-9, N-7491 Trondheim, Norway ekaterip@idi.ntnu.no Abstract  This paper discusses awareness support in educational context, focusing on the support offered by collaborative virtual environments. Awareness plays an important role in everyday educational activities, especially in engineering courses where projects and group work is an integral part of the curriculum. In this paper we will provide a general overview of awareness in computer supported cooperative work and then focus on the awareness mechanisms offered by CVEs. We will also discuss the role and importance of these mechanisms in educational context and make some comparisons between awareness support in CVEs and in more traditional tools.", "title": "" }, { "docid": "6db44a34a5a78c4a65fa7653dbf8ab96", "text": "Grush and Churchland (1995) attempt to address aspects of the proposal that we have been making concerning a possible physical mechanism underlying the phenomenon of consciousness. Unfortunately, they employ arguments that are highly misleading and, in some important respects, factually incorrect. Their article 'Gaps in Penrose's Toilings' is addressed specifically at the writings of one of us (Penrose), but since the particular model they attack is one put forward by both of us (Hameroff and Penrose, 1995; 1996), it is appropriate that we both reply; but since our individual remarks refer to different aspects of their criticism we are commenting on their article separately. The logical arguments discussed by Grush and Churchland, and the related physics are answered in Part l by Penrose, largely by pointing out precisely where these arguments have already been treated in detail in Shadows of the Mind (Penrose, 1994). In Part 2, Hameroff replies to various points on the biological side, showing for example how they have seriously misunderstood what they refer to as 'physiological evidence' regarding to effects of the drug colchicine. The reply serves also to discuss aspects of our model 'orchestrated objective reduction in brain microtubules – Orch OR' which attempts to deal with the serious problems of consciousness more directly and completely than any previous theory. Logical arguments It has been argued in the books by one of us, The Emperor's New Mind (Penrose, 1989 – henceforth Emperor) and Shadows of the Mind (Penrose, 1994 – henceforth Shadows) that Gödel's theorem shows that there must be something non–computational involved in mathematical thinking. The Grush and Churchland (1995 – henceforth G&C) discussion attempts to dismiss this argument from Gödel's theorem on certain grounds. However, the main points that they put forward are ones which have been amply addressed in Shadows. It is very hard to understand how G&C can make the claims that they do without giving any indication that virtually all their points are explicitly taken into account in Shadows. 
It might be the case that the", "title": "" }, { "docid": "58e176bb818efed6de7224d7088f2487", "text": "In the context of marketing, attribution is the process of quantifying the value of marketing activities relative to the final outcome. It is a topic rapidly growing in importance as acknowledged by the industry. However, despite numerous tools and techniques designed for its measurement, the absence of a comprehensive assessment and classification scheme persists. Thus, we aim to bridge this gap by providing an academic review to accumulate and comprehend current knowledge in attribution modeling, leading to a road map to guide future research, expediting new knowledge creation.", "title": "" }, { "docid": "ac6ce191c14b48695c82f3d230264777", "text": "We introduce a kernel-based method for change-point analysis within a sequence of temporal observations. Change-point analysis of an unlabelled sample of observations consists in, first, testing whether a change in the distribution occurs within the sample, and second, if a change occurs, estimating the change-point instant after which the distribution of the observations switches from one distribution to another different distribution. We propose a test statistic based upon the maximum kernel Fisher discriminant ratio as a measure of homogeneity between segments. We derive its limiting distribution under the null hypothesis (no change occurs), and establish the consistency under the alternative hypothesis (a change occurs). This allows to build a statistical hypothesis testing procedure for testing the presence of a change-point, with a prescribed false-alarm probability and detection probability tending to one in the large-sample setting. If a change actually occurs, the test statistic also yields an estimator of the change-point location. Promising experimental results in temporal segmentation of mental tasks from BCI data and pop song indexation are presented.", "title": "" }, { "docid": "67e06feae2a593017596ab238f9e096e", "text": "ABSTRACT\nThis paper presents a survey on methods that use digital image processing techniques to detect, quantify and classify plant diseases from digital images in the visible spectrum. Although disease symptoms can manifest in any part of the plant, only methods that explore visible symptoms in leaves and stems were considered. This was done for two main reasons: to limit the length of the paper and because methods dealing with roots, seeds and fruits have some peculiarities that would warrant a specific survey. The selected proposals are divided into three classes according to their objective: detection, severity quantification, and classification. Each of those classes, in turn, are subdivided according to the main technical solution used in the algorithm. This paper is expected to be useful to researchers working both on vegetable pathology and pattern recognition, providing a comprehensive and accessible overview of this important field of research.", "title": "" }, { "docid": "de4b2f6ff87b254a68ecd4a7b5318d66", "text": "Many scholars see entrepreneurs as action-oriented individuals who use rules of thumb and other mental heuristics to make decisions, but who do little systematic planning and analysis. We argue that what distinguishes successful from unsuccessful entrepreneurs is precisely that the former vary their decisionmaking styles, sometimes relying on heuristics and sometimes relying on systematic analysis. 
In our proposed framework, successful entrepreneurs assess their level of expertise and the level of ambiguity in a particular decision context and then tailor their decision-making process to reduce risk.", "title": "" }, { "docid": "b151d236ce17b4d03b384a29dbb91330", "text": "To investigate the blood supply to the nipple areola complex (NAC) on thoracic CT angiograms (CTA) to improve breast pedicle design in reduction mammoplasty. In a single centre, CT scans of the thorax were retrospectively reviewed for suitability by a cardiothoracic radiologist. Suitable scans had one or both breasts visible in extended fields, with contrast enhancement of breast vasculature in a female patient. The arterial sources, intercostal space perforated, glandular/subcutaneous course, vessel entry point, and the presence of periareolar anastomoses were recorded for the NAC of each breast. From 69 patients, 132 breasts were suitable for inclusion. The most reproducible arterial contribution to the NAC was perforating branches arising from the internal thoracic artery (ITA) (n = 108, 81.8%), followed by the long thoracic artery (LTA) (n = 31, 23.5%) and anterior intercostal arteries (AI) (n = 21, 15.9%). Blood supply was superficial versus deep in (n = 86, 79.6%) of ITA sources, (n = 28, 90.3%) of LTA sources, and 10 (47.6%) of AI sources. The most vascularly reliable breast pedicle would be asymmetrical in 7.9% as a conservative estimate. We suggest that breast CT angiography can provide valuable information about NAC blood supply to aid customised pedicle design, especially in high-risk, large-volume breast reductions where the risk of vascular-dependent complications is the greatest and asymmetrical dominant vasculature may be present. Superficial ITA perforator supplies are predominant in a majority of women, followed by LTA- and AIA-based sources, respectively.", "title": "" }, { "docid": "32faa5a14922d44101281c783cf6defb", "text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.", "title": "" }, { "docid": "8090121a59c1070aacc7a20941898551", "text": "In this article, I explicitly solve dynamic portfolio choice problems, up to the solution of an ordinary differential equation (ODE), when the asset returns are quadratic and the agent has a constant relative risk aversion (CRRA) coefficient. My solution includes as special cases many existing explicit solutions of dynamic portfolio choice problems. I also present three applications that are not in the literature. Application 1 is the bond portfolio selection problem when bond returns are described by ‘‘quadratic term structure models.’’ Application 2 is the stock portfolio selection problem when stock return volatility is stochastic as in Heston model. Application 3 is a bond and stock portfolio selection problem when the interest rate is stochastic and stock returns display stochastic volatility. (JEL G11)", "title": "" }, { "docid": "12be3f9c1f02ad3f26462ab841a80165", "text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. 
Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).", "title": "" }, { "docid": "28ef5955b08bdfbef524c96a41f4aa9e", "text": "A central concern in Evidence Based Medicine (EBM) is how to convey research results effectively to practitioners. One important idea is to summarize results by key summary statistics that describe the effectiveness (or lack thereof) of a given intervention, specifically the absolute risk reduction (ARR) and number needed to treat (NNT). Manual summarization is slow and expensive, thus, with the exponential growth of the biomedical research literature, automated solutions are needed. In this paper, we present a novel method for automatically creating EBM-oriented summaries from research abstracts of randomly-controlled trials (RCTs). The system extracts descriptions of the treatment groups and outcomes, as well as various associated quantities, and then calculates summary statistics. Results on a hand-annotated corpus of research abstracts show promising, and potentially useful, results.", "title": "" } ]
scidocsrr
c0439b7b9e978c4bc24ed86826ca9a08
Migrating Monolithic Mobile Application to Microservice Architecture: An Experiment Report
[ { "docid": "ca21a20152eef5081fa51e7f3a5c2d87", "text": "We review some of the most widely used patterns for the programming of microservices: circuit breaker, service discovery, and API gateway. By systematically analysing different deployment strategies for these patterns, we reach new insight especially for the application of circuit breakers. We also evaluate the applicability of Jolie, a language for the programming of microservices, for these patterns and report on other standard frameworks offering similar solutions. Finally, considerations for future developments are presented.", "title": "" } ]
[ { "docid": "9cb682049f4a4d1291189b7cfccafb1e", "text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.", "title": "" }, { "docid": "66da54da90bbd252386713751cec7c67", "text": "A cyber world (CW) is a digitized world created on cyberspaces inside computers interconnected by networks including the Internet. Following ubiquitous computers, sensors, e-tags, networks, information, services, etc., is a road towards a smart world (SW) created on both cyberspaces and real spaces. It is mainly characterized by ubiquitous intelligence or computational intelligence pervasion in the physical world filled with smart things. In recent years, many novel and imaginative researcheshave been conducted to try and experiment a variety of smart things including characteristic smart objects and specific smart spaces or environments as well as smart systems. The next research phase to emerge, we believe, is to coordinate these diverse smart objects and integrate these isolated smart spaces together into a higher level of spaces known as smart hyperspace or hyper-environments, and eventually create the smart world. In this paper, we discuss the potential trends and related challenges toward the smart world and ubiquitous intelligence from smart things to smart spaces and then to smart hyperspaces. Likewise, we show our efforts in developing a smart hyperspace of ubiquitous care for kids, called UbicKids.", "title": "" }, { "docid": "133c0f5dd7c1e61b26699b5c898b0962", "text": "Myocardial and vascular endothelial tissues have receptors for thyroid hormones and are sensitive to changes in the concentrations of circulating thyroid hormones. The importance of thyroid hormones in maintaining cardiovascular homeostasis can be deduced from clinical and experimental data showing that even subtle changes in thyroid hormone concentrations — such as those observed in subclinical hypothyroidism or hyperthyroidism, and low triiodothyronine syndrome — adversely influence the cardiovascular system. 
Some potential mechanisms linking the two conditions are dyslipidaemia, endothelial dysfunction, blood pressure changes, and direct effects of thyroid hormones on the myocardium. Several interventional trials showed that treatment of subclinical thyroid diseases improves cardiovascular risk factors, which implies potential benefits for reducing cardiovascular events. Over the past 2 decades, accumulating evidence supports the association between abnormal thyroid function at the time of an acute myocardial infarction (MI) and subsequent adverse cardiovascular outcomes. Furthermore, experimental studies showed that thyroid hormones can have an important therapeutic role in reducing infarct size and improving myocardial function after acute MI. In this Review, we summarize the literature on thyroid function in cardiovascular diseases, both as a risk factor as well as in the setting of cardiovascular diseases such as heart failure or acute MI, and outline the effect of thyroid hormone replacement therapy for reducing the risk of cardiovascular disease.", "title": "" }, { "docid": "3d288560b7736f75d31a62bbb615f73a", "text": "Abstract: Whole world and managements of Educational Institutions’ are worried about consistency of student attendance, which affects in their complete academic performance and finally affects the development of education in students’. Proper attendance recording and management has become important in today’s world as attendance and achievement go hand in hand. Attendance is one of the work ethics valued by employers. Most of the government organizations and educational institutions in developing countries still use paper-based attendance method for maintaining the attendance records. There is a need to replace these traditional methods of attendance recording with biometric attendance system. Besides being secure, Fingerprint based attendance system will also be environment friendly. Fingerprint matching is widely used in forensics for a long time. It can also be used in applications such as identity management and access control. This review incorporates the problems of attendance systems presently in use, working of a typical fingerprint-based attendance system, study of different systems, their advantages, disadvantages and comparison based upon important parameters. Presently the conventional methods for taking attendance is calling name their name/roll no or by signing on a paper, which practically time consuming and less secure also since there are many chance of proxy attendance. Hence, for preserving attendance there is a necessity of a computer-based student attendance supervision system which will assist the faculty. The paper reviews several computerized attendance supervision systems which is being developed by using different techniques. 
Keyword: — Biometric, Fingerprint, GSM, LabVIEW, Android, MATLAB, RFID, ZigBee systems", "title": "" }, { "docid": "fb8c665ba2a93d6c7d4c1763af946912", "text": "Always-on continuous sensing apps drain the battery quickly because they prevent the main processor from sleeping. Instead, sensor hub hardware, available in many smartphones today, can run continuous sensing at lower power while keeping the main processor idle. However, developers have to divide functionality between the main processor and the sensor hub. We implement MobileHub, a system that automatically rewrites applications to leverage the sensor hub without additional programming effort. MobileHub uses a combination of dynamic taint tracking and machine learning to learn when it is safe to leverage the sensor hub without affecting application semantics. We implement MobileHub in Android and prototype a sensor hub on a 8-bit AVR micro-controller. We experiment with 20 applications from Google Play. Our evaluation shows that MobileHub significantly reduces power consumption for continuous sensing apps.", "title": "" }, { "docid": "890a2092f3f55799e9c0216dac3d9e2f", "text": "The rise in popularity of permissioned blockchain platforms in recent time is significant. Hyperledger Fabric is one such permissioned blockchain platform and one of the Hyperledger projects hosted by the Linux Foundation. The Fabric comprises various components such as smart-contracts, endorsers, committers, validators, and orderers. As the performance of blockchain platform is a major concern for enterprise applications, in this work, we perform a comprehensive empirical study to characterize the performance of Hyperledger Fabric and identify potential performance bottlenecks to gain a better understanding of the system. We follow a two-phased approach. In the first phase, our goal is to understand the impact of various configuration parameters such as block size, endorsement policy, channels, resource allocation, state database choice on the transaction throughput & latency to provide various guidelines on configuring these parameters. In addition, we also aim to identify performance bottlenecks and hotspots. We observed that (1) endorsement policy verification, (2) sequential policy validation of transactions in a block, and (3) state validation and commit (with CouchDB) were the three major bottlenecks. In the second phase, we focus on optimizing Hyperledger Fabric v1.0 based on our observations. 
We introduced and studied various simple optimizations such as aggressive caching for endorsement policy verification in the cryptography component (3x improvement in the performance) and parallelizing endorsement policy verification (7x improvement). Further, we enhanced and measured the effect of an existing bulk read/write optimization for CouchDB during state validation & commit phase (2.5x improvement). By combining all three optimizations1, we improved the overall throughput by 16x (i.e., from 140 tps to 2250 tps).", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "64ba4467dc4495c6828f2322e8f415f2", "text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. 
MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.", "title": "" }, { "docid": "f0f47ce0fc361740aedf17d6d2061e03", "text": "In supervised learning scenarios, feature selection has been studied widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class labels that would guide the search for relevant information. And, almost all of previous unsupervised feature selection methods are “wrapper” techniques that require a learning algorithm to evaluate the candidate feature subsets. In this paper, we propose a “filter” method for feature selection which is independent of any learning algorithm. Our method can be performed in either supervised or unsupervised fashion. The proposed method is based on the observation that, in many real world classification problems, data from the same class are often close to each other. The importance of a feature is evaluated by its power of locality preserving, or, Laplacian Score. We compare our method with data variance (unsupervised) and Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.", "title": "" }, { "docid": "17c4ad36c7e97097d783382d7450279c", "text": "Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer based replication systems, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month-long period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to classical server-based content distribution.", "title": "" }, { "docid": "f38530be19fc66121fbce56552ade0ea", "text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional step-up CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.", "title": "" }, { "docid": "fc06673e86c237e06d9e927e2f6468a8", "text": "Locality sensitive hashing (LSH) is a computationally efficient alternative to distance-based anomaly detection.
The main advantages of LSH lie in constant detection time, low memory requirement, and simple implementation. However, since the metric of distance in LSHs does not consider the property of normal training data, a naive use of existing LSHs would not perform well. In this paper, we propose a new hashing scheme so that hash functions are selected dependently on the properties of the normal training data for reliable anomaly detection. The distance metric of the proposed method, called NSH (Normality Sensitive Hashing) is theoretically interpreted in terms of the region of normal training data and its effectiveness is demonstrated through experiments on real-world data. Our results are favorably comparable to state-of-the arts with the low-level features.", "title": "" }, { "docid": "b5ef16dbafcdafaf085abd562bfad0ad", "text": "Neuroimaging has greatly enhanced the cognitive neuroscience understanding of the human brain and its variation across individuals (neurodiversity) in both health and disease. Such progress has not yet, however, propelled changes in educational or medical practices that improve people's lives. We review neuroimaging findings in which initial brain measures (neuromarkers) are correlated with or predict future education, learning, and performance in children and adults; criminality; health-related behaviors; and responses to pharmacological or behavioral treatments. Neuromarkers often provide better predictions (neuroprognosis), alone or in combination with other measures, than traditional behavioral measures. With further advances in study designs and analyses, neuromarkers may offer opportunities to personalize educational and clinical practices that lead to better outcomes for people.", "title": "" }, { "docid": "75cf6e81de38f370d629d0041783243d", "text": "CONTEXT\nThe Association of American Medical Colleges' Institute for Improving Medical Education's report entitled 'Effective Use of Educational Technology' called on researchers to study the effectiveness of multimedia design principles. These principles were empirically shown to result in superior learning when used with college students in laboratory studies, but have not been studied with undergraduate medical students as participants.\n\n\nMETHODS\nA pre-test/post-test control group design was used, in which the traditional-learning group received a lecture on shock using traditionally designed slides and the modified-design group received the same lecture using slides modified in accord with Mayer's principles of multimedia design. Participants included Year 3 medical students at a private, midwestern medical school progressing through their surgery clerkship during the academic year 2009-2010. The medical school divides students into four groups; each group attends the surgery clerkship during one of the four quarters of the academic year. Students in the second and third quarters served as the modified-design group (n=91) and students in the fourth-quarter clerkship served as the traditional-design group (n=39).\n\n\nRESULTS\nBoth student cohorts had similar levels of pre-lecture knowledge. Both groups showed significant improvements in retention (p<0.0001), transfer (p<0.05) and total scores (p<0.0001) between the pre- and post-tests. 
Repeated-measures anova analysis showed statistically significant greater improvements in retention (F=10.2, p=0.0016) and total scores (F=7.13, p=0.0081) for those students instructed using principles of multimedia design compared with those instructed using the traditional design.\n\n\nCONCLUSIONS\nMultimedia design principles are easy to implement and result in improved short-term retention among medical students, but empirical research is still needed to determine how these principles affect transfer of learning. Further research on applying the principles of multimedia design to medical education is needed to verify the impact it has on the long-term learning of medical students, as well as its impact on other forms of multimedia instructional programmes used in the education of medical students.", "title": "" }, { "docid": "10f726ffc8ee1727b1c905f67fc80686", "text": "Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances are made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometrical constraint. Therefore these models purely rely on the quality of data and the effectiveness of learning to generalize. This either leads to suboptimal results or the demand of huge amount of expensive ground truth labelled data to generate reasonable results. In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties, namely i) geometrical constraints can be explicitly imposed during inference; ii) demand on labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained in an end-to-end fashion and this new formulation plays a critical role in advancing the performance. The resulting model outperforms all the previous monocular depth estimation methods as well as the stereo block matching method in the challenging KITTI dataset by only using a small number of real training data. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and the advantages of solving monocular depth estimation using stereo methods.", "title": "" }, { "docid": "f8093849e9157475149d00782c60ae60", "text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. 
This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.", "title": "" }, { "docid": "25eea5205d1f8beaa8c4a857da5714bc", "text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.", "title": "" }, { "docid": "fdda2a3c2148fcfd79bda7d688410b0b", "text": "Large scale dissemination of power grid entities such as distributed energy resources (DERs), electric vehicles (EVs), and smart meters has provided diverse challenges for Smart Grid automation. Novel control models such as virtual power plants (VPPs), microgrids, and smart houses introduce a new set of automation and integration demands that surpass capabilities of currently deployed solutions. Therefore, there is a strong need for finding an alternative technical approach, which can resolve identified issues and fulfill automation prerequisites implied by the Smart Grid vision. This paper presents a novel standards-compliant solution for accelerated Smart Grid integration and automation based on semantic services. Accordingly, two most influential industrial automation standards, IEC 61850 and OPC Unified Architecture (OPC UA) have been extensively analyzed in order to provide a value-added service-oriented integration framework for the Smart Grid.", "title": "" }, { "docid": "55b1eb2df97e5d8e871e341c80514ab1", "text": "Modern digital still cameras sample the color spectrum using a color filter array coated to the CCD array such that each pixel samples only one color channel. The result is a mosaic of color samples which is used to reconstruct the full color image by taking the information of the pixels’ neighborhood. This process is called demosaicking. While standard literature evaluates the performance of these reconstruction algorithms by comparison of a ground-truth image with a reconstructed Bayer pattern image in terms of grayscale comparison, this work gives an evaluation concept to asses the geometrical accuracy of the resulting color images. Only if no geometrical distortions are created during the demosaicking process, it is allowed to use such images for metric calculations, e.g. 
3D reconstruction or arbitrary metrical photogrammetric processing.", "title": "" }, { "docid": "071124f28583ef9f09363f0da0eb2bc6", "text": "Objective: This study proposes and evaluates a novel data-driven spatial filtering approach for enhancing steady-state visual evoked potentials (SSVEPs) detection toward a high-speed brain-computer interface (BCI) speller. Methods: Task-related component analysis (TRCA), which can enhance reproducibility of SSVEPs across multiple trials, was employed to improve the signal-to-noise ratio (SNR) of SSVEP signals by removing background electroencephalographic (EEG) activities. An ensemble method was further developed to integrate TRCA filters corresponding to multiple stimulation frequencies. This study conducted a comparison of BCI performance between the proposed TRCA-based method and an extended canonical correlation analysis (CCA)-based method using a 40-class SSVEP dataset recorded from 12 subjects. An online BCI speller was further implemented using a cue-guided target selection task with 20 subjects and a free-spelling task with 10 of the subjects. Results: The offline comparison results indicate that the proposed TRCA-based approach can significantly improve the classification accuracy compared with the extended CCA-based method. Furthermore, the online BCI speller achieved averaged information transfer rates (ITRs) of 325.33 ± 38.17 bits/min with the cue-guided task and 198.67 ± 50.48 bits/min with the free-spelling task. Conclusion: This study validated the efficiency of the proposed TRCA-based method in implementing a high-speed SSVEP-based BCI. Significance: The high-speed SSVEP-based BCIs using the TRCA method have great potential for various applications in communication and control.", "title": "" } ]
scidocsrr
6386f2159ab4a40587aa9acaf1f6a46e
Trajectory clustering via deep representation learning
[ { "docid": "1b53378e33f24f59eb0486f2978bebee", "text": "The advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data, which represent the mobility of a diversity of moving objects, such as people, vehicles, and animals. Many techniques have been proposed for processing, managing, and mining trajectory data in the past decade, fostering a broad range of applications. In this article, we conduct a systematic survey on the major research into trajectory data mining, providing a panorama of the field as well as the scope of its research topics. Following a road map from the derivation of trajectory data, to trajectory data preprocessing, to trajectory data management, and to a variety of mining tasks (such as trajectory pattern mining, outlier detection, and trajectory classification), the survey explores the connections, correlations, and differences among these existing techniques. This survey also introduces the methods that transform trajectories into other data formats, such as graphs, matrices, and tensors, to which more data mining and machine learning techniques can be applied. Finally, some public trajectory datasets are presented. This survey can help shape the field of trajectory data mining, providing a quick understanding of this field to the community.", "title": "" }, { "docid": "cf264a124cc9f68cf64cacb436b64fa3", "text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.", "title": "" } ]
[ { "docid": "f4e6c5c5c7fccbf0f72ff681cd3a8762", "text": "Program specifications are important for many tasks during software design, development, and maintenance. Among these, temporal specifications are particularly useful. They express formal correctness requirements of an application's ordering of specific actions and events during execution, such as the strict alternation of acquisition and release of locks. Despite their importance, temporal specifications are often missing, incomplete, or described only informally. Many techniques have been proposed that mine such specifications from execution traces or program source code. However, existing techniques mine only simple patterns, or they mine a single complex pattern that is restricted to a particular set of manually selected events. There is no practical, automatic technique that can mine general temporal properties from execution traces.\n In this paper, we present Javert, the first general specification mining framework that can learn, fully automatically, complex temporal properties from execution traces. The key insight behind Javert is that real, complex specifications can be formed by composing instances of small generic patterns, such as the alternating pattern ((ab)) and the resource usage pattern ((ab c)). In particular, Javert learns simple generic patterns and composes them using sound rules to construct large, complex specifications. We have implemented the algorithm in a practical tool and conducted an extensive empirical evaluation on several open source software projects. Our results are promising; they show that Javert is scalable, general, and precise. It discovered many interesting, nontrivial specifications in real-world code that are beyond the reach of existing automatic techniques.", "title": "" }, { "docid": "ccc4c7499d1344fb6581ed22abc0c445", "text": "A dual-band X-/Military Ka-band (MKa-band) single-aperture reflectarray structure that transmits and receives at both MKaand X-bands is introduced in this paper. Frequency selective surface (FSS) is used as the ground plane for the MKaband reflectarray. A cascade configuration of FSS-backed reflectarrays is designed for each of the receive and transmit bands of MKa-band to reduce the coupling between the elements of these two bands when they are etched on the same substrate. Additional FSS's are implemented to further enhance the isolation between the bands. The MKa-band elements are located on top of the X-band reflectarray that is composed of elements etched on a perfect electric conductor (PEC)-backed ground plane. The reflectarray elements also convert circular to linear polarization which results in a simplified feed structure.", "title": "" }, { "docid": "670b1d7cf683732c38d197126e094a74", "text": "Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domainspecific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. 
With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning.", "title": "" }, { "docid": "b8b82691002e3d694d5766ea3269a78e", "text": "This article presents a framework for improving the Software Configuration Management (SCM) process that includes a maturity model to assess software organizations and an approach to guide the transition from diagnosis to action planning. The maturity model and assessment tool are useful to identify the degree of satisfaction for practices considered key for SCM. The transition approach is also important because the application of a model to produce a diagnosis is just a first step; organizations are demanding the generation of action plans to implement the recommendations. The proposed framework has been used to assess a number of software organizations and to generate the basis to build an action plan for improvement. In summary, this article shows that the maturity model and action planning approach are instrumental to reach higher SCM control and visibility, therefore producing higher quality software.", "title": "" }, { "docid": "4dc8b11b9123c6a25dcf4765d77cb6ca", "text": "Accurate and reliable information about land use and land cover is essential for change detection and monitoring of the specified area. It is also useful in updating the geographical information about the area. Over the past decade, a significant amount of research has been conducted concerning the application of different classifiers and image fusion techniques in this area. In this paper, introductions to the land use and land cover classification techniques are given and the results from a number of different techniques are compared. It has been found that, in general, fusion techniques perform better than either conventional classifiers or supervised/unsupervised classification.", "title": "" }, { "docid": "8b266924ebd2fccd4c8204700868ec51", "text": "abstract class Expression { abstract Expression smallStep(State state) throws CanNotReduce; abstract Type typeCheck(Environment env) throws TypeError; } abstract class Value extends Expression { final Expression smallStep(State state) throws CanNotReduce { throw new CanNotReduce(\"I’m a value\"); } } class CanNotReduce extends Exception { CanNotReduce(String reason) { super(reason); } } class TypeError extends Exception { TypeError(String reason) { super(reason); } } class Bool extends Value { boolean value; Bool(boolean b) { value = b; } public String toString() { return value ? \"TRUE\" : \"FALSE\"; } Type typeCheck(Environment env) throws TypeError { return Type.BOOL; } } class Int extends Value { int value; Int(int i) { value = i; } public String toString() { return \"\" + value; } Type typeCheck(Environment env) throws TypeError { return Type.INT; } } class Skip extends Value { public String toString() { return \"SKIP\"; } Type typeCheck(Environment env) throws TypeError { return Type.UNIT; } }", "title": "" }, { "docid": "7f53f16a4806d8179725cd9aa4537800", "text": "Corpus linguistics is one of the fastest-growing methodologies in contemporary linguistics. In a conversational format, this article answers a few questions that corpus linguists regularly face from linguists who have not used corpus-based methods so far.
It discusses some of the central assumptions (‘formal distributional differences reflect functional differences’), notions (corpora, representativity and balancedness, markup and annotation), and methods of corpus linguistics (frequency lists, concordances, collocations), and discusses a few ways in which the discipline still needs to mature. At a recent LSA meeting ... [with an obvious bow to Frederick Newmeyer] Question: So, I hear you’re a corpus linguist. Interesting, I get to see more and more abstracts and papers and even job ads where experience with corpus-based methods are mentioned, but I actually know only very little about this area. So, what’s this all about? Answer: Yes, it’s true, it’s really an approach that’s gaining more and more prominence in the field. In an editorial of the flagship journal of the discipline, Joseph (2004:382) actually wrote ‘we seem to be witnessing as well a shift in the way some linguists find and utilize data – many papers now use corpora as their primary data, and many use internet data’. Question: My impression exactly. Now, you say ‘approach’, but that’s something I’ve never really understood. Corpus linguistics – is that a theory or model or a method or what? Answer: Good question and, as usual, people differ in their opinions. One well-known corpus linguist, for example, considers corpus linguistics – he calls it computer corpus linguistics – a ‘new philosophical approach [...]’ Leech (1992:106). Many others, including myself, consider it a method(ology), no more, but also no less (cf. McEnery et al. 2006:7f). However, I don’t think this difference would result in many practical differences. Taylor (2008) discusses this issue in more detail, and for an amazingly comprehensive overview of how huge and diverse the field has become, cf. Lüdeling and Kytö (2008, 2009). Question: Hm ... But if you think corpus linguistics is a methodology, .... Well, let me ask you this: usually, linguists try to interpret the data they investigate against the background of some theory. Generative grammarians interpret their acceptability judgments within Government and Binding Theory or the Minimalist Program; some psycholinguists interpret their reaction time data within, for example, a connectionist interactive activation model – now if corpus linguistics is only a methodology, then what is the theory within which you interpret your findings? Answer: Again as usual, there’s no simple answer to this question; it depends .... There are different perspectives one can take. One is that many corpus linguists would perhaps even say that for them, linguistic theory is not of the same prime importance as it is in, for example, generative approaches. Correspondingly, I think it’s fair to say that a large body of corpus-linguistic work has a rather descriptive or applied focus and does actually not involve much linguistic theory. Another one is that corpus linguistic methods are a method just as acceptability judgments, experimental data, etc. and that linguists of every theoretical persuasion can use corpus data. If a linguist investigates how lexical items become more and more used as grammatical markers in a corpus, then the results are descriptive and ⁄ or most likely interpreted within some form of grammaticalization theory.
If a linguist studies how German second language learners of English acquire the formation of complex clauses, then he will either just describe what he finds or interpret it within some theory of second language acquisition and so on... . There’s one other, more general way to look at it, though. I can of course not speak for all corpus linguists, but I myself think that a particular kind of linguistic theory is actually particularly compatible with corpus-linguistic methods. These are usage-based cognitive-linguistic theories, and they’re compatible with corpus linguistics in several ways. (You’ll find some discussion in Schönefeld 1999.) First, the units of language assumed in cognitive linguistics and corpus linguistics are very similar: what is a unit in probably most versions of cognitive linguistics or construction grammar is a symbolic unit or a construction, which is an element that covers morphemes, words, etc. Such symbolic units or constructions are often defined broadly enough to match nearly all of the relevant corpus-linguistic notions (cf. Gries 2008a): collocations, colligations, phraseologisms, .... Lastly, corpus-linguistic analyses are always based on the evaluation of some kind of frequencies, and frequency as well as its supposed mental correlate of cognitive entrenchment is one of several central key explanatory mechanisms within cognitively motivated approaches (cf., e.g. Bybee and Hopper 1997; Barlow and Kemmer 2000; Ellis 2002a,b; Goldberg 2006). Question: Wait a second – ‘corpus-linguistic analyses are always based on the evaluation of some kind of frequencies?’ What does that mean? I mean, most linguistic research I know is not about frequencies at all – if corpus linguistics is all about frequencies, then what does corpus linguistics have to contribute? Answer: Well, many corpus linguists would probably not immediately agree to my statement, but I think it’s true anyway. There are two things to be clarified here. First, frequency of what? The answer is, there are no meanings, no functions, no concepts in corpora – corpora are (usually text) files and all you can get out of such files is distributional (or quantitative ⁄ statistical) information: ) frequencies of occurrence of linguistic elements, i.e. how often morphemes, words, grammatical patterns etc. occur in (parts of) a corpus, etc.; this information is usually represented in so-called frequency lists; ) frequencies of co-occurrence of these elements, i.e. how often morphemes occur with particular words, how often particular words occur in a certain grammatical construction, etc.; this information is mostly shown in so-called concordances in which all occurrences of, say, the word searched for are shown in their respective contexts. Figure 1 is an example. As a linguist, you don’t just want to talk about frequencies or distributional information, which is why corpus linguists must make a particular fundamental assumption or a conceptual leap, from frequencies to the things linguists are interested in, but frequencies is where it all starts. Second, what kind of frequency? The answer is that the notion frequency doesn’t presuppose that the relevant linguistic phenomenon occurs in a corpus 100 or 1000 times – the notion of frequency also includes phenomena that occur only once or not at all.
For example, there are statistical methods and models out there that can handle non-occurrence or estimate frequencies of unseen items. Thus, corpus linguistics is concerned with whether ) something (an individual element or the co-occurrence of more than one individual element) is attested in corpora; i.e. whether the observed frequency (of occurrence or co-occurrence) is 0 or larger; ) something is attested in corpora more often than something else; i.e. whether an observed frequency is larger than the observed frequency of something else; ) something is observed more or less often than you would expect by chance [this is a more profound issue than it may seem at first; Stefanowitsch (2006) discusses this in more detail]. This also implies that statistical methods can play a large part in corpus linguistics, but this is one area where I think the discipline must still mature or evolve. [Fig. 1. A concordance output from AntConc 3.2.2w.] Question: What do you mean? Answer: Well, this is certainly a matter of debate, but I think that a field that developed in part out of a dissatisfaction concerning methods and data in linguistics ought to be very careful as far as its own methods and data are concerned. It is probably fair to say that many linguists turned to corpus data because they felt there must be more to data collection than researchers intuiting acceptability judgments about what one can say and what one cannot; cf. Labov (1975) and, say, Wasow and Arnold (2005:1485) for discussion and exemplification of the mismatch between the reliability of judgment data by prominent linguists of that time and the importance that was placed on them, as well as McEnery and Wilson (2001: Ch. 1), Sampson (2001: Chs 2, 8, and 10), and the special issue of Corpus Linguistics and Linguistic Theory (CLLT ) 5.1 (2008) on corpus linguistic positions regarding many of Chomsky’s claims in general and the method of acceptability judgments in particular. However, since corpus data only provide distributional information in the sense mentioned earlier, this also means that corpus data must be evaluated with tools that have been designed to deal with distributional information and the discipline that provides such tools is statistics. And this is actually completely natural: psychologists and psycholinguists undergo comprehensive training in experimental methods and the statistical tools relevant to these methods so it’s only fair that corpus linguists do the same in their domain. After all, it would be kind of a double standard to on the one hand bash many theoretical li", "title": "" }, { "docid": "4d8fbd31a27b9221109b971caa535386", "text": "Fog computing, also called “clouds at the edge,” is an emerging paradigm allocating services near the devices to improve the quality of service (QoS). The explosive prevalence of Internet of Things, big data, and fog computing in the context of cloud computing makes it extremely challenging to explore both cloud and fog resource scheduling strategy so as to improve the efficiency of resources utilization, satisfy the users’ QoS requirements, and maximize the profit of both resource providers and users.
This paper proposes a resource allocation strategy for fog computing based on priced timed Petri nets (PTPNs), by which the user can choose the satisfying resources autonomously from a group of preallocated resources. Our strategy comprehensively considers the price cost and time cost to complete a task, as well as the credibility evaluation of both users and fog resources. We construct the PTPN models of tasks in fog computing in accordance with the features of fog resources. Algorithm that predicts task completion time is presented. Method of computing the credibility evaluation of fog resource is also proposed. In particular, we give the dynamic allocation algorithm of fog resources. Simulation results demonstrate that our proposed algorithms can achieve a higher efficiency than static allocation strategies in terms of task completion time and price.", "title": "" }, { "docid": "11cfe05879004f225aee4b3bda0ce30b", "text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. 
We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "83f5af68f54f9db0608d8173432188f9", "text": "JaTeCS is an open source Java library that supports research on automatic text categorization and other related problems, such as ordinal regression and quantification, which are of special interest in opinion mining applications. It covers all the steps of an experimental activity, from reading the corpus to the evaluation of the experimental results. As JaTeCS is focused on text as the main input data, it provides the user with many text-dedicated tools, e.g.: data readers for many formats, including the most commonly used text corpora and lexical resources, natural language processing tools, multi-language support, methods for feature selection and weighting, the implementation of many machine learning algorithms as well as wrappers for well-known external software (e.g., SVMlight) which enable their full control from code. JaTeCS support its expansion by abstracting through interfaces many of the typical tools and procedures used in text processing tasks. The library also provides a number of “template” implementations of typical experimental setups (e.g., train-test, k-fold validation, grid-search optimization, randomized runs) which enable fast realization of experiments just by connecting the templates with data readers, learning algorithms and evaluation measures.", "title": "" }, { "docid": "f291c66ebaa6b24d858103b59de792b7", "text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.", "title": "" }, { "docid": "c2aa986c09f81c6ab54b0ac117d03afb", "text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. 
Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright  2003 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "13bd14ebf972d855be2b22f5bf9b6998", "text": "The access of visually impaired users to imagery in social media is constrained by the availability of suitable alt text. It is unknown how imperfections in emerging tools for automatic caption generation may help or hinder blind users’ understanding of social media posts with embedded imagery. In this paper, we study how crowdsourcing can be used both for evaluating the value provided by existing automated approaches and for enabling workflows that provide scalable and useful alt text to blind users. Using real-time crowdsourcing, we designed experiences that varied the depth of interaction of the crowd in assisting visually impaired users at caption interpretation, and measured trade-offs in effectiveness, scalability, and reusability. We show that the shortcomings of existing AI image captioning systems frequently hinder a user’s understanding of an image they cannot see to a degree that even clarifying conversations with sighted assistants cannot correct. Our detailed analysis of the set of clarifying conversations collected from our studies led to the design of experiences that can effectively assist users in a scalable way without the need for real-time interaction. They also provide lessons and guidelines that human captioners and the designers of future iterations of AI captioning systems can use to improve labeling of social media imagery for blind users.", "title": "" }, { "docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2", "text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.", "title": "" }, { "docid": "890b1d7bf396b3051e5bd2e969122d71", "text": "We envision small cells mounted on unmanned aerial vehicles, to complement existing macrocell infrastructure. We demonstrate through numerical analysis that clustering algorithms can be used to position the airborne access points and select users to offload from the macrocells. We compare the performance of these deployments against equivalent simulated picocell deployments. We demonstrate that due to their ability to position themselves around exact user locations while maintaining a direct line-of-sight link the airborne access points provide a significantly improved received signal strength than the static picocell alternatives. 
We also find that the airborne access points provide superior service quality even in the presence of user and access point positioning errors.", "title": "" }, { "docid": "30b508c7b576c88705098ac18657664b", "text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.", "title": "" }, { "docid": "1406e39d95505da3d7ab2b5c74c2e068", "text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.", "title": "" }, { "docid": "b229aa8b39b3df3fec941ce4791a2fe9", "text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. 
We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.", "title": "" }, { "docid": "7e33c62ce15c9eb7894a0feff3d2cfb4", "text": "Revenue management has been used in a variety of industries and generally takes the form of managing demand by manipulating length of customer usage and price. Supply mix is rarely considered, although it can have considerable impact on revenue. In this research, we focused on developing an optimal supply mix, specifically on determining the supply mix that would maximize revenue. We used data from a Chevys restaurant, part of a large chain of Mexican restaurants, in conjunction with a simulation model to evaluate and enumerate all possible supply (table) mixes. Compared to the restaurant’s existing table mix, the optimal mix is capable of handling a 30% increase in customer volume without increasing waiting times beyond their original levels. While our study was in a restaurant context, the results of this research are applicable to other service businesses.", "title": "" } ]
scidocsrr
d618d7bb00434265a497decf1247ab7f
Integrating the Internet of Things with Business Process Management: A Process-aware Framework for Smart Objects
[ { "docid": "17f8affa7807932f58950303c3b62296", "text": "The Internet of Things (IoT) has grown in recent years to a huge branch of research: RFID, sensors and actuators as typical IoT devices are increasingly used as resources integrated into new value added applications of the Future Internet and are intelligently combined using standardised software services. While most of the current work on IoT integration focuses on areas of the actual technical implementation, little attention has been given to the integration of the IoT paradigm and its devices coming with native software components as resources in business processes of traditional enterprise resource planning systems. In this paper, we identify and integrate IoT resources as a novel automatic resource type on the business process layer beyond the classical human resource task-centric view of the business process model in order to face expanding resource planning challenges of future enterprise environments.", "title": "" } ]
[ { "docid": "5962b5655d389bbdc5274650d365cd37", "text": "Swelling of the upper lip can result from various diseases such as salivary tumors, infectious and inflammatory diseases and cysts. Among the latter, dentigerous cysts, typically involving unerupted teeth, are sometimes associated with supernumerary teeth in the maxillary anterior incisors region called the mesiodens. We report an unusual case of a large dentigerous cyst associated with an impacted mesiodens in a 42-year-old male who presented with a slow-growing swelling in the upper lip.", "title": "" }, { "docid": "b898a5e8d209cf8ed7d2b8bfae0e58e2", "text": "Large datasets often have unreliable labels—such as those obtained from Amazon's Mechanical Turk or social media platforms—and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. Then, we train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled) dataset. The augmented model is underdetermined, so in order to encourage the learning of a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.", "title": "" }, { "docid": "8acd410ff0757423d09928093e7e8f63", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .", "title": "" }, { "docid": "9fa361bd7611197c99515fbd91146658", "text": "The connecting rod fatigue of universal tractor (U650) was investigated through the ANSYS software application and its lifespan was estimated. The reason for performing this research showed the connecting rod behavior affected by fatigue phenomenon due to the cyclic loadings and to consider the results for more savings in time and costs, as two very significant parameters relevant to manufacturing. The results indicate that with fully reverse loading, one can estimate longevity of a connecting rod and also find the critical points that more possibly the crack growth initiate from. Furthermore, the allowable number of load cycles and using fully reverse loading was gained 10. It is suggested that the results obtained can be useful to bring about modifications in the process of connecting rod manufacturing.", "title": "" }, { "docid": "b771737351b984881e0fce7f9bb030e8", "text": "BACKGROUND\nConsidering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone.\n\n\nMATERIAL/METHODS\nIn this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. 
The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing.\n\n\nRESULTS\nThe Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group.\n\n\nCONCLUSIONS\nThis study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.", "title": "" }, { "docid": "968555bbada2d930b97d8bb982580535", "text": "With the recent developments in three-dimensional (3-D) scanner technologies and photogrammetric techniques, it is now possible to acquire and create accurate models of historical and archaeological sites. In this way, unrestricted access to these sites, which is highly desirable from both a research and a cultural perspective, is provided. Through the process of virtualisation, numerous virtual collections are created. These collections must be archived, indexed and visualised over a very long period of time in order to be able to monitor and restore them as required. However, the intrinsic complexities and tremendous importance of ensuring long-term preservation and access to these collections have been widely overlooked. This neglect may lead to the creation of a so-called “Digital Rosetta Stone”, where models become obsolete and the data cannot be interpreted or virtualised. This paper presents a framework for the long-term preservation of 3-D cultural heritage data as well as the application thereof in monitoring, restoration and virtual access. The interplay between raw data and model is considered as well as the importance of calibration. Suitable archiving and indexing techniques are described and the issue of visualisation over a very long period of time is addressed. An approach to experimentation through detachment, migration and emulation is presented.", "title": "" }, { "docid": "6a4815ee043e83994e4345b6f4352198", "text": "Object detection – the computer vision task dealing with detecting instances of objects of a certain class (e.g., ’car’, ’plane’, etc.) in images – has attracted a lot of attention from the community during the last 5 years. This strong interest can be explained not only by the importance this task has for many applications but also by the phenomenal advances in this area since the arrival of deep convolutional neural networks (DCNN). This article reviews the recent literature on object detection with deep CNN, in a comprehensive way, and provides an in-depth view of these recent advances. 
The survey covers not only the typical architectures (SSD, YOLO, Faster-RCNN) but also discusses the challenges currently met by the community and goes on to show how the problem of object detection can be extended. This survey also reviews the public datasets and associated state-of-the-art algorithms.", "title": "" }, { "docid": "1ce476577e092ee91d54afc672f29196", "text": "In this paper we continue to investigate how the deep neural network (DNN) based acoustic models for automatic speech recognition can be trained without hand-crafted feature extraction. Previously, we have shown that a simple fully connected feedforward DNN performs surprisingly well when trained directly on the raw time signal. The analysis of the weights revealed that the DNN has learned a kind of short-time time-frequency decomposition of the speech signal. In conventional feature extraction pipelines this is done manually by means of a filter bank that is shared between the neighboring analysis windows. Following this idea, we show that the performance gap between DNNs trained on spliced hand-crafted features and DNNs trained on raw time signal can be strongly reduced by introducing 1D-convolutional layers. Thus, the DNN is forced to learn a short-time filter bank shared over a longer time span. This also allows us to interpret the weights of the second convolutional layer in the same way as 2D patches learned on critical band energies by typical convolutional neural networks. The evaluation is performed on an English LVCSR task. Trained on the raw time signal, the convolutional layers allow to reduce the WER on the test set from 25.5% to 23.4%, compared to an MFCC based result of 22.1% using fully connected layers.", "title": "" }, { "docid": "a0840cf58ca21b738543924f6ed1a2f3", "text": "Emojis have been widely used in textual communications as a new way to convey nonverbal cues. An interesting observation is the various emoji usage patterns among different users. In this paper, we investigate the correlation between user personality traits and their emoji usage patterns, particularly on overall amounts and specific preferences. To achieve this goal, we build a large Twitter dataset which includes 352,245 users and over 1.13 billion tweets associated with calculated personality traits and emoji usage patterns. Our correlation and emoji prediction results provide insights into the power of diverse personalities that lead to varies emoji usage patterns as well as its potential in emoji recommendation", "title": "" }, { "docid": "f9bd24894ed3eace01f51966c61f2a5d", "text": "Ethanolic extract from the fruits of Pimpinella anisoides, an aromatic plant and a spice, exhibited activity against AChE and BChE, with IC(50) values of 227.5 and 362.1 microg/ml, respectively. The most abundant constituents of the extract were trans-anethole, (+)-limonene and (+)-sabinene. trans-Anethole exhibited the highest activity against AChE and BChE with IC(50) values of 134.7 and 209.6 microg/ml, respectively. The bicyclic monoterpene (+)-sabinene exhibited a promising activity against AChE (IC(50) of 176.5 microg/ml) and BChE (IC(50) of 218.6 microg/ml).", "title": "" }, { "docid": "6580bf1cdf9fdf89fa7e2f2b40ed1c51", "text": "NAND flash memories have bit errors that are corrected by error-correction codes (ECC). We present raw error data from multi-level-cell devices from four manufacturers, identify the root-cause mechanisms, and estimate the resulting uncorrectable bit error rates (UBER). 
Write, retention, and read-disturb errors all contribute. Accurately estimating the UBER requires care in characterization to include all write errors, which are highly erratic, and guardbanding for variation in raw bit error rate. NAND UBER values can be much better than 10^-15, but UBER is a strong function of program/erase cycling and subsequent retention time, so UBER specifications must be coupled with maximum specifications for these quantities.", "title": "" }, { "docid": "98cc82852083eae53d06621f37cde9e5", "text": "Automatically recognizing a large number of action categories from videos is of significant importance for video understanding. Most existing works focused on the design of more discriminative feature representation, and have achieved promising results when the positive samples are enough. However, very limited efforts were spent on recognizing a novel action without any positive exemplars, which is often the case in the real settings due to the large amount of action classes and the dramatic variations in users’ queries. To address this issue, we propose to perform action recognition when no positive exemplars of that class are provided, which is often known as the zero-shot learning. Different from other zero-shot learning approaches, which exploit attributes as the intermediate layer for the knowledge transfer, our main contribution is SIR, which directly leverages the semantic inter-class relationships between the known and unknown actions followed by label transfer learning. The inter-class semantic relationships are automatically measured by continuous word vectors, which are learned by the skip-gram model using the large-scale text corpus. Extensive experiments on the UCF101 dataset validate the superiority of our method over fully-supervised approaches using few positive exemplars.", "title": "" }, { "docid": "3ac4705cd79dc6e9c5410bb28f53d948", "text": "Unauthorized rogue access points (APs), such as those brought into a corporate campus by employees, pose a security threat as they may be poorly managed or insufficiently secured. Any attacker in the vicinity can easily get onto the internal network through a rogue AP, bypassing all perimeter security measures. Existing detection solutions work well for detecting layer-2 rogue APs. It is a challenge, however, to accurately detect a layer-3 rogue AP that is protected by WEP or other security measures. In this paper, we describe a new rogue AP detection method to address this problem. Our solution uses a verifier on the internal wired network to send test traffic towards the wireless edge, and uses wireless sniffers to identify rogue APs that relay the test packets. To quickly sweep all possible rogue APs, the verifier uses a greedy algorithm to schedule the channels for the sniffers to listen to. To work with the encrypted AP traffic, the sniffers use a probabilistic algorithm that only relies on observed packet size. Using extensive experiments, we show that the proposed approach can robustly detect rogue APs with moderate network overhead.", "title": "" }, { "docid": "3d8daed65bfd41a3610627e896837a4a", "text": "BACKGROUND\nDrug-resistant tuberculosis threatens recent gains in the treatment of tuberculosis and human immunodeficiency virus (HIV) infection worldwide. A widespread epidemic of extensively drug-resistant (XDR) tuberculosis is occurring in South Africa, where cases have increased substantially since 2002. 
The factors driving this rapid increase have not been fully elucidated, but such knowledge is needed to guide public health interventions.\n\n\nMETHODS\nWe conducted a prospective study involving 404 participants in KwaZulu-Natal Province, South Africa, with a diagnosis of XDR tuberculosis between 2011 and 2014. Interviews and medical-record reviews were used to elicit information on the participants' history of tuberculosis and HIV infection, hospitalizations, and social networks. Mycobacterium tuberculosis isolates underwent insertion sequence (IS)6110 restriction-fragment-length polymorphism analysis, targeted gene sequencing, and whole-genome sequencing. We used clinical and genotypic case definitions to calculate the proportion of cases of XDR tuberculosis that were due to inadequate treatment of multidrug-resistant (MDR) tuberculosis (i.e., acquired resistance) versus those that were due to transmission (i.e., transmitted resistance). We used social-network analysis to identify community and hospital locations of transmission.\n\n\nRESULTS\nOf the 404 participants, 311 (77%) had HIV infection; the median CD4+ count was 340 cells per cubic millimeter (interquartile range, 117 to 431). A total of 280 participants (69%) had never received treatment for MDR tuberculosis. Genotypic analysis in 386 participants revealed that 323 (84%) belonged to 1 of 31 clusters. Clusters ranged from 2 to 14 participants, except for 1 large cluster of 212 participants (55%) with a LAM4/KZN strain. Person-to-person or hospital-based epidemiologic links were identified in 123 of 404 participants (30%).\n\n\nCONCLUSIONS\nThe majority of cases of XDR tuberculosis in KwaZulu-Natal, South Africa, an area with a high tuberculosis burden, were probably due to transmission rather than to inadequate treatment of MDR tuberculosis. These data suggest that control of the epidemic of drug-resistant tuberculosis requires an increased focus on interrupting transmission. (Funded by the National Institute of Allergy and Infectious Diseases and others.).", "title": "" }, { "docid": "31c16e6c916030b8f6e76d56e35d47ef", "text": "Assume that a multi-user multiple-input multiple-output (MIMO) communication system must be designed to cover a given area with maximal energy efficiency (bits/Joule). What are the optimal values for the number of antennas, active users, and transmit power? By using a new model that describes how these three parameters affect the total energy efficiency of the system, this work provides closed-form expressions for their optimal values and interactions. In sharp contrast to common belief, the transmit power is found to increase (not decrease) with the number of antennas. This implies that energy efficient systems can operate at high signal-to-noise ratio (SNR) regimes in which the use of interference-suppressing precoding schemes is essential. Numerical results show that the maximal energy efficiency is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve relatively many users using interference-suppressing regularized zero-forcing precoding.", "title": "" }, { "docid": "827d54d02e82c5c58dde0bcb428ba1d4", "text": "We present a novel unsupervised approach for multilingual sentiment analysis driven by compositional syntax-based rules. 
On the one hand, we exploit some of the main advantages of unsupervised algorithms: (1) the interpretability of their output , in contrast with most supervised models , which behave as a black box and (2) their robustness across different corpora and domains. On the other hand, by introducing the concept of compositional operations and exploiting syntactic information in the form of universal dependencies , we tackle one of their main drawbacks: their rigidity on data that are differently structured depending on the language. Experiments show an improvement both over existing unsupervised methods, and over state-of-the-art supervised models when evaluating outside their corpus of origin. The system is freely available 1 .", "title": "" }, { "docid": "e664941234f6ea6a74fcf49c80adcfcf", "text": "There is increasing interest in the potential neuropsychological impact of sports-related concussion. A meta-analysis of the relevant literature was conducted to determine the impact of sports-related concussion across six cognitive domains. The analysis was based on 21 studies involving 790 cases of concussion and 2014 control cases. The overall effect of concussion (d = 0.49) was comparable to the effect found in the non-sports-related mild traumatic brain injury population (d = 0.54; Belanger et al., 2005). Using sports-concussed participants with a history of prior head injury appears to inflate the effect sizes associated with the current sports-related concussion. Acute effects (within 24 hr of injury) of concussion were greatest for delayed memory, memory acquisition, and global cognitive functioning (d = 1.00, 1.03, and 1.42, respectively). However, no residual neuropsychological impairments were found when testing was completed beyond 7 days postinjury. These findings were moderated by cognitive domain and comparison group (control group versus preconcussion self-control). Specifically, delayed memory in studies utilizing a control group remained problematic at 7 days. The implications and limitations of these findings are discussed.", "title": "" }, { "docid": "98110985cd175f088204db452a152853", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" }, { "docid": "7b08d9e80d61788c9fdd01cdac917f5b", "text": "Resonant dc-dc converters offer several advantages over the more conventional PWM converters. 
Some of these advantages include: 1) low switching losses and low transistor stresses; 2) medium speed diodes are sufficient (transistor parasitic, inverse-parallel diodes can be used, even for frequencies in the hundreds of kilohertz); and 3) ability to step the input voltage up or down. This paper presents an analysis of a resonant converter which contains a capacitive-input output filter, rather than the more conventional inductor-input output filter. The switching waveforms are derived and design curves presented along with experimental data. The results are compared to the inductor-input filter case obtained from an earlier paper.", "title": "" }, { "docid": "0c975acb5ab3f413078171840b17b232", "text": "We have analysed associated factors in 164 patients with acute compartment syndrome whom we treated over an eight-year period. In 69% there was an associated fracture, about half of which were of the tibial shaft. Most patients were men, usually under 35 years of age. Acute compartment syndrome of the forearm, with associated fracture of the distal end of the radius, was again seen most commonly in young men. Injury to soft tissues, without fracture, was the second most common cause of the syndrome and one-tenth of the patients had a bleeding disorder or were taking anticoagulant drugs. We found that young patients, especially men, were at risk of acute compartment syndrome after injury. When treating such injured patients, the diagnosis should be made early, utilising measurements of tissue pressure.", "title": "" } ]
scidocsrr
7d8338308fbd210286c141ee380127f9
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
[ { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "2a718f193be63630087bd6c5748b332a", "text": "This study investigates the intrasentential assignment of reference to pronouns (him, her) and anaphors (himself, herself) as characterized by Binding Theory in a subgroup of \"Grammatical specifically language-impaired\" (SLI) children. The study aims to (1) provide further insight into the underlying nature of Grammatical SLI in children and (2) elucidate the relationship between different sources of knowledge, that is, syntactic knowledge versus knowledge of lexical properties and pragmatic inference in the assignment of intrasentential coreference. In two experiments, using a picture-sentence pair judgement task, the children's knowledge of the lexical properties versus syntactic knowledge (Binding Principles A and B) in the assignment of reflexives and pronouns was investigated. The responses of 12 Grammatical SLI children (aged 9:3 to 12:10) and three language ability (LA) control groups of 12 children (aged 5:9 to 9:1) were compared. The results indicated that the SLI children and the LA controls may use a combination of conceptual-lexical and pragmatic knowledge (e.g., semantic gender, reflexive marking of the predicate, and assignment of theta roles) to help assign reference to anaphors and pronouns. The LA controls also showed appropriate use of the syntactic knowledge. In contrast, the SLI children performed at chance when syntactic information was crucially required to rule out inappropriate coreference. The data are consistent with an impairment with the (innate) syntactic knowledge characterized by Binding Theory which underlies reference assignment to anaphors and pronouns. We conclude that the SLI children's syntactic representation is underspecified with respect to coindexation between constituents and the syntactic properties of pronouns. Support is provided for the proposal that Grammatical SLI children have a modular language deficit with syntactic dependent structural relationships between constituents, that is, a Representational Deficit with Dependent Relationships (RDDR). Further consideration of the linguistic characteristics of this deficit is made in relation to the hypothesized syntactic representations of young normally developing children.", "title": "" }, { "docid": "a1b7f477c339f30587a2f767327b4b41", "text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. 
The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.", "title": "" }, { "docid": "205a38ac9f2df57a33481d36576e7d54", "text": "Business process improvement initiatives typically employ various process analysis techniques, including evidence-based analysis techniques such as process mining, to identify new ways to streamline current business processes. While plenty of process mining techniques have been proposed to extract insights about the way in which activities within processes are conducted, techniques to understand resource behaviour are limited. At the same time, an understanding of resources behaviour is critical to enable intelligent and effective resource management an important factor which can significantly impact overall process performance. The presence of detailed records kept by today’s organisations, including data about who, how, what, and when various activities were carried out by resources, open up the possibility for real behaviours of resources to be studied. This paper proposes an approach to analyse one aspect of resource behaviour: the manner in which a resource prioritises his/her work. The proposed approach has been formalised, implemented, and evaluated using a number of synthetic and real datasets. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "099a291a9a0adaf1b6d276387ab73ca5", "text": "BACKGROUND\nAround the world, populations are aging and there is a growing concern about ways that older adults can maintain their health and well-being while living in their homes.\n\n\nOBJECTIVES\nThe aim of this paper was to conduct a systematic literature review to determine: (1) the levels of technology readiness among older adults and, (2) evidence for smart homes and home-based health-monitoring technologies that support aging in place for older adults who have complex needs.\n\n\nRESULTS\nWe identified and analyzed 48 of 1863 relevant papers. Our analyses found that: (1) technology-readiness level for smart homes and home health monitoring technologies is low; (2) the highest level of evidence is 1b (i.e., one randomized controlled trial with a PEDro score ≥6); smart homes and home health monitoring technologies are used to monitor activities of daily living, cognitive decline and mental health, and heart conditions in older adults with complex needs; (3) there is no evidence that smart homes and home health monitoring technologies help address disability prediction and health-related quality of life, or fall prevention; and (4) there is conflicting evidence that smart homes and home health monitoring technologies help address chronic obstructive pulmonary disease.\n\n\nCONCLUSIONS\nThe level of technology readiness for smart homes and home health monitoring technologies is still low. The highest level of evidence found was in a study that supported home health technologies for use in monitoring activities of daily living, cognitive decline, mental health, and heart conditions in older adults with complex needs.", "title": "" }, { "docid": "dc8af68ed9bbfd8e24c438771ca1d376", "text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. 
In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.", "title": "" }, { "docid": "7eb5b730d47da0ee7be8f6c7f4963a2e", "text": "D.T. Lennon†1, H. Moon†1, L.C. Camenzind, Liuqi Yu, D.M. Zumbühl, G.A.D. Briggs, M.A. Osborne, E.A. Laird, and N. Ares Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, United Kingdom Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Engineering, University of Oxford, Walton Well Road, Oxford OX2 6ED, United Kingdom Department of Physics, Lancaster University, Lancaster, LA1 4YB, United Kingdom (Dated: October 25, 2018)", "title": "" }, { "docid": "2d9e905b5cb06a214fb5b36b9215b766", "text": "Existing context-aware adaptation techniques are limited in their support for user personalization. There is relatively less developed research involving adaptive user modeling for user applications in the emerging areas of mobile and pervasive computing. This paper describes the creation of a User Profile Ontology for context-aware application personalization within mobile environments. We analyze users’ behavior and characterize users’ needs for context-aware applications. Special emphasis is placed in the ontological modeling of dynamic components for use in adaptable applications. We illustrate the use of the model in the context of a case study, focusing on providing personalized services to older people via smart-device technologies.", "title": "" }, { "docid": "ecfa876df3c83b98ff6c85530e611548", "text": "Hand-crafted rules and reinforcement learning (RL) are two popular choices to obtain dialogue policy. The rule-based policy is often reliable within predefined scope but not self-adaptable, whereas RL is evolvable with data but often suffers from a bad initial performance. We employ a companion learning framework to integrate the two approaches for on-line dialogue policy learning, in which a predefined rule-based policy acts as a teacher and guides a data-driven RL system by giving example actions as well as additional rewards. A novel agent-aware dropout Deep Q-Network (AAD-DQN) is proposed to address the problem of when to consult the teacher and how to learn from the teacher’s experiences. AADDQN, as a data-driven student policy, provides (1) two separate experience memories for student and teacher, (2) an uncertainty estimated by dropout to control the timing of consultation and learning. 
Simulation experiments showed that the proposed approach can significantly improve both safety and efficiency of on-line policy optimization compared to other companion learning approaches as well as supervised pre-training using static dialogue corpus.", "title": "" }, { "docid": "ca5b9cd1634431254e1a454262eecb40", "text": "This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.", "title": "" }, { "docid": "0965f1390233e71da72fbc8f37394add", "text": "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. 
Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.", "title": "" }, { "docid": "549f9f07d2fe87e882e0f9a0de8bfe99", "text": "We introduce a layered, heterogeneous spectral reflectance model for human skin. The model captures the inter-scattering of light among layers, each of which may have an independent set of spatially-varying absorption and scattering parameters. For greater physical accuracy and control, we introduce an infinitesimally thin absorbing layer between scattering layers. To obtain parameters for our model, we use a novel acquisition method that begins with multi-spectral photographs. By using an inverse rendering technique, along with known chromophore spectra, we optimize for the best set of parameters for each pixel of a patch. Our method finds close matches to a wide variety of inputs with low residual error.\n We apply our model to faithfully reproduce the complex variations in skin pigmentation. This is in contrast to most previous work, which assumes that skin is homogeneous or composed of homogeneous layers. We demonstrate the accuracy and flexibility of our model by creating complex skin visual effects such as veins, tattoos, rashes, and freckles, which would be difficult to author using only albedo textures at the skin's outer surface. Also, by varying the parameters to our model, we simulate effects from external forces, such as visible changes in blood flow within the skin due to external pressure.", "title": "" }, { "docid": "3f2d9b5257896a4469b7e1c18f1d4e41", "text": "Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently DEA has been extended to examine the efficiency of two-stage processes, where all the outputs from the first stage are intermediate measures that make up the inputs to the second stage. The resulting two-stage DEA model provides not only an overall efficiency score for the entire process, but as well yields an efficiency score for each of the individual stages. Due to the existence of intermediate measures, the usual procedure of adjusting the inputs or outputs by the efficiency scores, as in the standard DEA approach, does not necessarily yield a frontier projection. The current paper develops an approach for determining the frontier points for inefficient DMUs within the framework of two-stage DEA. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "613e2c58df0153f40fb5b77c989fa8e3", "text": "Considering the growing demand of location-based services in indoor environments and development of Wi-Fi in recent years, indoor localization based on fingerprinting has attracted many researchers interest. In this paper, we introduce a novel fuzzy Least Squares Support Vector Machine (LS-SVM) based indoor fingerprinting system by using the received signal strength (RSS). 
In the offline phase, RSS values of all Wi-Fi signals detected from the available access points are collected at different reference points with known locations and are stored in a database. In the online phase, the target position is estimated by calculating fuzzy membership functions of samples and using formulation of fuzzy LS-SVM method. Simulation results show that average estimation error of the proposed method is 2.56m, while average positioning error of traditional LS-SVM methods was 4.61m.", "title": "" }, { "docid": "2e39ec6079098b042064e02a8f1cbd1c", "text": "Image processing is one of most growing research area these days and now it is very much integrated with the industrial production. Generally speaking, It is very difficult for us to distinguish the exact number of the copper core in the tiny wire, However, in order to ensure that the wire meets the requirements of production, we have to know the accurate number of copper core in the wire. Here the paper will introduce a method of image edge detection to determine the exact number of the copper core in the tiny wire based on OpenCV with rich computer vision and image processing algorithms and functions. Firstly, we use high-resolution camera to take picture of the internal structure of the wire. Secondly, we use OpenCV image processing functions to implement image preprocessing. Thirdly we use morphological opening and closing operations to segment image because of their blur image edges. Finally the exact number of copper core can be clearly distinguished through contour tracking. By using of Borland C++ Builder 6.0, experimental results show that OpenCV based image edge detection methods are simple, high code integration, and high image edge positioning accuracy. ", "title": "" }, { "docid": "0107d7777a01050a75fbe06bde3a397b", "text": "To review our current knowledge of the pathologic bone metabolism in otosclerosis and to discuss the possibilities of non-surgical, pharmacological intervention. Otosclerosis has been suspected to be associated with defective measles virus infection, local inflammation and consecutive bone deterioration in the human otic capsule. In the early stages of otosclerosis, different pharmacological agents may delay the progression or prevent further deterioration of the disease and consecutive hearing loss. Although effective anti-osteoporotic drugs have become available, the use of sodium fluoride and bisphosphonates in otosclerosis has not yet been successful. Bioflavonoids may relieve tinnitus due to otosclerosis, but there is no data available on long-term application and effects on sensorineural hearing loss. In the initial inflammatory phase, corticosteroids or non-steroidal anti-inflammatory drugs may be effective; however, extended systemic application may lead to serious side effects. Vitamin D administration may have effects on the pathological bone loss, as well as on inflammation. No information has been reported on the use of immunosuppressive drugs. Anti-cytokine targeted biological therapy, however, may be feasible. Indeed, one study on the local administration of infliximab has been reported. Potential targets of future therapy may include osteoprotegerin, RANK ligand, cathepsins and also the Wnt-β-catenin pathway. Finally, anti-measles vaccination may delay the progression of the disease and potentially decrease the number of new cases. In conclusion, stapes surgery remains to be widely accepted treatment of conductive hearing loss due to otosclerosis. 
Due to lack of solid evidence, the place of pharmacological treatment targeting inflammation and bone metabolism needs to be determined by future studies.", "title": "" }, { "docid": "44da229583f7c6576870f87d33ac0842", "text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems—BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.", "title": "" }, { "docid": "01ff7e55830977622482ab018acd2cfe", "text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.", "title": "" }, { "docid": "b06c18822c119b72fe6d55bb58478a2b", "text": "The Sphinx-4 speech recognition system is the latest addition to Carnegie Mellon University's repository of Sphinx speech recognition systems. It has been jointly designed by Carnegie Mellon University, Sun Microsystems Laboratories and Mitsubishi Electric Research Laboratories. It is differently designed from the earlier Sphinx systems in terms of modularity, flexibility and algorithmic aspects. It uses newer search strategies, is universal in its acceptance of various kinds of grammars and language models, types of acoustic models and feature streams. Algorithmic innovations included in the system design enable it to incorporate multiple information sources in an elegant manner. The system is entirely developed on the JavaTM platform and is highly portable, flexible, and easier to use with multithreading. 
This paper describes the salient features of the Sphinx-4 decoder and includes preliminary performance measures relating to speed and accuracy.", "title": "" }, { "docid": "8411c13863aeb4338327ea76e0e2725b", "text": "There is often the need to update an installed Intrusion Detection System (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert security knowledge, changes to IDSs are expensive and slow. In this paper, we describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. Detection models for new intrusions or specific components of a network system are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report our results of applying these programs to the (extensively gathered) network audit data from the DARPA Intrusion Detection Evaluation Program.", "title": "" }, { "docid": "800870b3404edaea957580fbc5a80bce", "text": "The purpose of this project was to investigate the antimicrobial effect of phytochemicals extracted from onion and ginger (fresh and boiled) at different concentrations, against Escherichia coli and Staphylococcus aureus using different methods (MIC, MBC, Disk and well diffusion). This project was chosen because onion and ginger are very common spices and have been claimed to contain several antimicrobial agents.", "title": "" } ]
scidocsrr
ca9db36f52ba9228d15a818af63d9ae3
Female genital injuries resulting from consensual and non-consensual vaginal intercourse.
[ { "docid": "5eb304f9287785a65dd159e42a51eb8c", "text": "The forensic examination following rape has two primary purposes: to provide health care and to collect evidence. Physical injuries need treatment so that they heal without adverse consequences. The pattern of injuries also has a forensic significance in that injuries are linked to the outcome of legal proceedings. This literature review investigates the variables related to genital injury prevalence and location that are reported in a series of retrospective reviews of medical records. The author builds the case that the prevalence and location of genital injury provide only a partial description of the nature of genital trauma associated with sexual assault and suggests a multidimensional definition of genital injury pattern. Several of the cited studies indicate that new avenues of investigation, such as refined measurement strategies for injury severity and skin color, may lead to advancements in health care, forensic, and criminal justice science.", "title": "" } ]
[ { "docid": "fc8063bddea3c70d77636683a03a52d7", "text": "Speaker attributed variability are undesirable in speaker independent speech recognition systems. The gender of the speaker is one of the influential sources of this variability. Common speech recognition systems tuned to the ensemble statistics over many speakers to compensate the inherent variability of speech signal. In this paper we will separate the datasets based on the gender to build gender dependent hidden Markov model for each word. The gender separation criterion is the average pitch frequency of the speaker. Experimental evaluation shows significant improvement in word recognition accuracy over the gender independent method with a slight increase in the processing computation.", "title": "" }, { "docid": "1f37b0d252de40c55eee0109c168983b", "text": "The algorithm may be programmed without multiplication or division instructions and is eficient with respect to speed of execution and memory utilization. This paper describes an algorithm for computer control of a type of digital plotter that is now in common use with digital computers .' The plotter under consideration is capable of executing, in response to an appropriate pulse, any one of the eight linear movements shown in Figure 1. Thus, the plotter can move linearly from a point on a mesh to any adjacent point on the mesh. A typical mesh size is 1/100th of an inch. The data to be plotted are expressed in an (x , y) rectangular coordinate system which has been scaled with respect to the mesh; i.e., the data points lie on mesh points and consequently have integral coordinates. It is assumed that the data include a sufficient number of appropriately selected points to produce a satisfactory representation of the curve by connecting the points with line segments, as illustrated in Figure 2. In Figure 3, the line segment connecting", "title": "" }, { "docid": "69b5c883c7145d2184f77c92e61b2542", "text": "WiFi network traffics will be expected to increase sharply in the coming years, since WiFi network is commonly used for local area connectivity. Unfortunately, there are difficulties in WiFi network research beforehand, since there is no common dataset between researchers on this area. Recently, AWID dataset was published as a comprehensive WiFi network dataset, which derived from real WiFi traces. The previous work on this AWID dataset was unable to classify Impersonation Attack sufficiently. Hence, we focus on optimizing the Impersonation Attack detection. Feature selection can overcome this problem by selecting the most important features for detecting an arbitrary class. We leverage Artificial Neural Network (ANN) for the feature selection and apply Stacked Auto Encoder (SAE), a deep learning algorithm as a classifier for AWID Dataset. Our experiments show that the reduced input features have significantly improved to detect the Impersonation Attack.", "title": "" }, { "docid": "cd8f880b2c290ac6066beb4010d90001", "text": "The miniaturization of integrated circuits based on complementary metal oxide semiconductor (CMOS) technology meets a significant slowdown in this decade due to several technological and scientific difficulties. Spintronic devices such as magnetic tunnel junction (MTJ) nanopillar become one of the most promising candidates for the next generation of memory and logic chips thanks to their non-volatility, infinite endurance, and high density. 
A magnetic processor based on spintronic devices is then expected to overcome the issue of increasing standby power due to leakage currents and high dynamic power dedicated to data moving. For the purpose of fabricating such a non-volatile magnetic processor, a new design of multi-bit magnetic adder (MA)-the basic element of arithmetic/logic unit for any processor-whose input and output data are stored in perpendicular magnetic anisotropy (PMA) domain wall (DW) racetrack memory (RM)-is presented in this paper. The proposed multi-bit MA circuit promises nearly zero standby power, instant ON/OFF capability, and smaller die area. By using an accurate racetrack memory spice model, we validated this design and simulated its performance such as speed, power and area, etc.", "title": "" }, { "docid": "180a271a86f9d9dc71cc140096d08b2f", "text": "This communication demonstrates for the first time the capability to independently control the real and imaginary parts of the complex propagation constant in planar, printed circuit board compatible leaky-wave antennas. The structure is based on a half-mode microstrip line which is loaded with an additional row of periodic metallic posts, resulting in a substrate integrated waveguide SIW with one of its lateral electric walls replaced by a partially reflective wall. The radiation mechanism is similar to the conventional microstrip leaky-wave antenna operating in its first higher-order mode, with the novelty that the leaky-mode leakage rate can be controlled by virtue of a sparse row of metallic vias. For this topology it is demonstrated that it is possible to independently control the antenna pointing angle and main lobe beamwidth while achieving high radiation efficiencies, thus providing low-cost, low-profile, simply fed, and easily integrable leaky-wave solutions for high-gain frequency beam-scanning applications. Several prototypes operating at 15 GHz have been designed, simulated, manufactured and tested, to show the operation principle and design flexibility of this one dimensional leaky-wave antenna.", "title": "" }, { "docid": "529a329c6d0cd82b7565426359bd04e0", "text": "Despite the significant advancement in wireless technologies over the years, IEEE 802.11 still emerges as the de-facto standard to achieve the required short to medium range wireless device connectivity in anywhere from offices to homes. With it being ranked the highest among all deployed wireless technologies in terms of market adoption, vulnerability exploitation and attacks targeting it have also been commonly observed. IEEE 802.11 security has thus become a key concern over the years. In this paper, we analysed the threats and attacks targeting the IEEE 802.11 network and also identified the challenges of achieving accurate threat and attack classification, especially in situations where the attacks are novel and have never been encountered by the detection and classification system before. We then proposed a solution based on anomaly detection and classification using a deep learning approach. The deep learning approach self-learns the features necessary to detect network anomalies and is able to perform attack classification accurately. 
In our experiments, we considered the classification as a multi-class problem (that is, legitimate traffic, flooding type attacks, injection type attacks and impersonation type attacks), and achieved an overall accuracy of 98.6688% in classifying the attacks through the proposed solution.", "title": "" }, { "docid": "110742230132649f178d2fa99c8ffade", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.", "title": "" }, { "docid": "fab33f2e32f4113c87e956e31674be58", "text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition", "title": "" }, { "docid": "3fc075e0ed303ef50fdcf943d3951b58", "text": "In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimation of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem, that incorporates image observation uncertainties and remains real-time capable at the same time. Further, the presented method is general, as is works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.", "title": "" }, { "docid": "ca1c232e84e7cb26af6852007f215715", "text": "Word embedding-based methods have received increasing attention for their flexibility and effectiveness in many natural language-processing (NLP) tasks, including Word Similarity (WS). However, these approaches rely on high-quality corpus and neglect prior knowledge. Lexicon-based methods concentrate on human’s intelligence contained in semantic resources, e.g., Tongyici Cilin, HowNet, and Chinese WordNet, but they have the drawback of being unable to deal with unknown words. 
This article proposes a three-stage framework for measuring the Chinese word similarity by incorporating prior knowledge obtained from lexicons and statistics into word embedding: in the first stage, we utilize retrieval techniques to crawl the contexts of word pairs from web resources to extend context corpus. In the next stage, we investigate three types of single similarity measurements, including lexicon similarities, statistical similarities, and embedding-based similarities. Finally, we exploit simple combination strategies with math operations and the counter-fitting combination strategy using optimization method. To demonstrate our system’s efficiency, comparable experiments are conducted on the PKU-500 dataset. Our final results are 0.561/0.516 of Spearman/Pearson rank correlation coefficient, which outperform the state-of-the-art performance to the best of our knowledge. Experiment results on Chinese MC-30 and SemEval-2012 datasets show that our system also performs well on other Chinese datasets, which proves its transferability. Besides, our system is not language-specific and can be applied to other languages, e.g., English.", "title": "" }, { "docid": "42325b507cb2529187a870e30ab727f2", "text": "Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embed-ding models.", "title": "" }, { "docid": "2528b23554f934a67b3ed66f7df9d79e", "text": "In this paper, we implemented an approach to predict final exam scores from early course assessments of the students during the semester. We used a linear regression model to check which part of the evaluation of the course assessment affects final exam score the most. In addition, we explained the origins of data mining and data mining in education. After preprocessing and preparing data for the task in hand, we implemented the linear regression model. The results of our work show that quizzes are most accurate predictors of final exam scores compared to other kinds of assessments.", "title": "" }, { "docid": "6fa37e865d7b40c11733b3bde3dfdf91", "text": "Web shells are programs that are written for a specific purpose in Web scripting languages, such as PHP, ASP, ASP.NET, JSP, PERL-CGI, etc. Web shells provide a means to communicate with the server’s operating system via the interpreter of the web scripting languages. Hence, web shells can execute OS specific commands over HTTP. Usually, web attacks by malicious users are made by uploading one of these web shells to compromise the target web servers. Though there have been several approaches to detect such malicious web shells, no standard dataset has been built to compare various web shell detection techniques. 
In this paper, we present a collection of web shell files, WebSHArk 1.0, as a standard dataset for current and future studies in malicious web shell detection. To provide baseline results for future studies and for the improvement of current tools, we also present some benchmark results by scanning the WebSHArk dataset directory with three web shell scanning tools that are publicly available on the Internet. The WebSHArk 1.0 dataset is only available upon request via email to one of the authors, due to security and legal issues.", "title": "" }, { "docid": "b8d8785968023a38d742abc15c01ee28", "text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. time.", "title": "" }, { "docid": "5b34cc85e267f28c3eda238620f4646a", "text": "An electrostatic chuck is one of the useful device holding a thin object flat on a bed by electrostatic force. The authors have investigated the fundamental characteristics of an electrostatic chuck consisted of a pair of comb type electrodes and a thin insulation layer between the electrodes and an object. When a thin polymer film is used as an insulation, the holding force for a wafer was large enough in practical use, while the large residual force remains after removing the DC applied voltage. Thus, it was concluded that AC applied voltage will be more preferable than DC, though the electrostatic force for DC applied voltage is somewhat greater than that for AC voltage. Since the electrostatic chuck is generally used in high temperature atmosphere, for example plasma etching, in the semiconductor industry, the insulating layer must be heat resistant. By using a thin ceramic plate, which was made specially for this purpose, the fundamental characteristics of the electrostatic chuck has been investigated. The greater holding force was obtained with a ceramic plate than that with a polymer film. Furthermore, almost no residual force was observed even for the DC applied voltage. 
The experimental results are reported both in air and in vacuum condition.", "title": "" }, { "docid": "4d4fdd2956ee315d39a94e7501b077ad", "text": "While in recent years machine learning (ML) based approaches have been the popular approach in developing endto-end question answering systems, such systems often struggle when additional knowledge is needed to correctly answer the questions. Proposed alternatives involve translating the question and the natural language text to a logical representation and then use logical reasoning. However, this alternative falters when the size of the text gets bigger. To address this we propose an approach that does logical reasoning over premises written in natural language text. The proposed method uses recent features of Answer Set Programming (ASP) to call external NLP modules (which may be based on ML) which perform simple textual entailment. To test our approach we develop a corpus based on the life cycle questions and showed that Our system achieves up to 18% performance gain when compared to standard MCQ solvers. Developing intelligent agents that can understand natural language, reason and use commonsense knowledge has been one of the long term goals of AI. To track the progress towards this goal, several question answering challenges have been proposed (Levesque, Davis, and Morgenstern 2012; Clark et al. 2018; Richardson, Burges, and Renshaw 2013; Rajpurkar et al. 2016). Our work here is related to the school level science question answering challenge, ARISTO (Clark 2015; Clark et al. 2018). As shown in (Clark et al. 2018) existing IR based and end-to-end machine learning systems work well on a subset of science questions but there exists a significant amount of questions that appears to be hard for existing solvers. In this work we focus on one particular genre of such questions, namely questions about life cycles (and more generally, sequences), even though they have a small presence in the corpus. To get a better understanding of the “life cycle” questions and the “hard” ones among them consider the questions from Table 1. The text in Table 1, which describes the life cycle of a frog does not contain all the knowledge that is necessary to answer the questions. In fact, all the questions require some additional knowledge that is not given in the text. Question 1 requires knowing the definition of “middle” of a sequence. Question 2 requires the knowledge of “between”. Question Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Life Cycle of a Frog order: egg→ tadpole→ tadpole with legs→ adult egg Tiny frog eggs are laid in masses in the water by a female frog. The eggs hatch into tadpoles. tadpole (also called the polliwog) This stage hatches from the egg. The tadpole spends its time swimming in the water, eating and growing. Tadpoles breathe using gills and have a tail. tadpole with legs In this stage the tadpole sprouts legs (and then arms), has a longer body, and has a more distinct head. It still breathes using gills and has a tail. froglet In this stage, the almost mature frog breathes with lungs and still has some of its tail. adult The adult frog breathes with lungs and has no tail (it has been absorbed by the body). 1. What is the middle stage in a frogs life? (A) tadpole with legs (B) froglet 2. What is a stage that comes between tadpole and adult in the life cycle of a frog? (A) egg (B) froglet 3. What best indicates that a frog has reached the adult stage? 
(A) When it has lungs (B) When its tail has been absorbed by the body Table 1: A text for life cycle of a Frog with few questions. 3 on other hand requires the knowledge of “a good indicator”. Note that for question 3, knowing whether an adult frog has lungs or if it is the adult stage where the frog loses its tail is not sufficient to decide if option (A) is the indicator or option (B). In fact an adult frog satisfies both the conditions. An adult frog has lungs and the tail gets absorbed in the adult stage. It is the uniqueness property that decides that option (B) is an indicator for the adult stage. We believe to answer these questions the system requires access to this knowledge. Since this additional knowledge of “middle”, “between”, “indicator” (and some related ones which are shown later) is applicable to any sequence in general and is not specific to only life cycles, we aim to provide this knowledge to the question answering system and then plan to train it so that it can recognize the question types. The paradigm of declarative programming provides a natural solution for adding background knowledge. Also the existing semantic parsers perform well on recognizing questions categories. However the existing declarative programming based question answering methods demand the premises (here the life cycle text) to be given in a logical form. For the domain of life cycle question answering this seems a very demanding and impractical requirement due to the wide variety of sentences that can be present in a life cycle text. Also a life cycle text in our dataset contains 25 lines on average which makes the translation more challenging. The question that we then address is, “can the system utilize the additional knowledge (for e.g. the knowledge of an “indicator”) without requiring the entire text to be given in a formal language?” We show that by using Answer Set Programming and some of its recent features (function symbols) to call external modules that are trained to do simple textual entailment, it is possible do declaratively reasoning over text. We have developed a system following this approach that answers questions from life cycle text by declaratively reasoning about concepts such as “middle”, “between”, “indicator” over premises given in natural language text. To evaluate our method a new dataset has been created with the help of Amazon Mechanical Turk. The entire dataset contains 5811 questions that are created from 41 life cycle texts. A part of this dataset is used for testing. Our system achieved up to 18% performance improvements when compared to standard baselines. Our contributions in this work are two-fold: (a) we propose a novel declarative programming method that accepts natural language texts as premises, which as a result extends the range of applications where declarative programming can be applied and also brings down the development time significantly; (b) we create a new dataset of life cycle texts and questions (https://goo.gl/YmNQKp), which contains annotated logical forms for each question. Background Answer Set Programming An Answer Set Program is a collection of rules of the form, L0 :L1, ..., Lm,not Lm+1, ...,not Ln. where each of the Li’s is a literal in the sense of classical logic. Intuitively, the above rule means that if L1, ..., Lm are true and if Lm+1, ..., Ln can be safely assumed to be false then L0 must be true (Gelfond and Lifschitz 1988). The lefthand side of an ASP rule is called the head and the righthand side is called the body. 
The symbol “:-” (“if”) is dropped if the body is empty; such rules are called facts. Throughout this paper, predicates and constants in a rule start with a lower case letter, while variables start with a capital letter. The following ASP program represents question 3 from Table 1 with three facts and one rule. Listing 1: a sample question representation qIndicator(frog,adult). option(a, has(lungs)). option(b, hasNo(tail)). ans(X) :- option(X,V), indicator(O,S,V),", "title": "" }, { "docid": "570e03101ae116e2f20ab6337061ec3f", "text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001).
In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.", "title": "" }, { "docid": "0b44a994625563ae892e9737c40cc9ac", "text": "A conventional 3D printer utilizes horizontal plane layerings to produce a 3D printed part. However, there are drawbacks associated with horizontal plane layerings motions, e.g., support material needed to printed an overhang structure. To enable multi-plane printing, an industrial robot arm platform is proposed for additive manufacturing. The concept being explored is the integration of existing additive manufacturing process technologies with an industrial robot arm to create a 3D printer with a multi-plane layering capability. The objective is to perform multi-plane toolpath motions that will leverage the increased capability of the robot arm platform compared to conventional gantry-style 3D printers. This approach enables print layering in multiple planes whereas existing conventional 3D printers are restricted to a single toolpath plane (e.g. x-y plane). This integration combines the fused deposition modeling techniques using an extruder head that is typically used in 3D printing and a 6 degree of freedom robot arm. Here, a Motoman SV3X is used as the platform for the robot arm. A higher level controller is used to coordinate the robot and extruder. For the higher level controller to communicate with the robot arm controller, interface software based on the MotoCom SDK libraries was implemented. The integration of the robotic arm and extruder enables multiplane toolpath motions to be utilized in the production of 3D printed parts. Using this integrated system, a test block with an overhang structure has been 3D printed without the use of support material .", "title": "" }, { "docid": "380d5763b687dd8add69a9fe306ad06a", "text": "OBJECTIVE\nThe objective of this study was to identify the healing process and outcome of hymenal injuries in prepubertal and adolescent girls.\n\n\nMETHODS\nThis multicenter, retrospective project used photographs to document the healing process and outcome of hymenal trauma that was sustained by 239 prepubertal and pubertal girls whose ages ranged from 4 months to 18 years.\n\n\nRESULTS\nThe injuries that were sustained by the 113 prepubertal girls consisted of 21 accidental or noninflicted injuries, 73 secondary to abuse, and 19 \"unknown cause\" injuries. All 126 pubertal adolescents were sexual assault victims. The hymenal injuries healed at various rates and except for the deeper lacerations left no evidence of the previous trauma. Abrasions and \"mild\" submucosal hemorrhages disappeared within 3 to 4 days, whereas \"marked\" hemorrhages persisted for 11 to 15 days. Only petechiae and blood blisters proved to be \"markers\" for determining the approximate age of an injury. Petechiae resolved within 48 hours in the prepubertal girls and 72 hours in the adolescents. A blood blister was detected at 34 days in an adolescent. As lacerations healed, their observed depth became shallower and their configuration smoothed out. Of the girls who sustained \"superficial,\" \"intermediate,\" or \"deep\" lacerations, 15 of 18 prepubertal girls had smooth and continuous appearing hymenal rims, whereas 24 of 41 adolescents' hymens had a normal, \"scalloped\" appearance and 30 of 34 had no disruption of continuity on healing. The final \"width\" of a hymenal rim was dependent on the initial depth of the laceration. 
No scar tissue formation was observed in either group of girls.\n\n\nCONCLUSIONS\nThe hymenal injuries healed rapidly and, except for the more extensive lacerations, left no evidence of a previous injury. There were no significant differences in the healing process and the outcome of the hymenal injuries in the 2 groups of girls.", "title": "" } ]
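The digital-plotter passage near the top of this list (docid 1f37b0d2…) describes tracing a line segment on an integer mesh using only the plotter's eight unit movements, with no multiplication or division inside the loop. The sketch below is one plausible Python reconstruction of that stepping scheme (essentially Bresenham's method); the function name and the (dx, dy) step encoding are illustrative choices, not the paper's original program.

```python
def plotter_steps(x0, y0, x1, y1):
    """Yield unit moves (dx, dy), each component in {-1, 0, 1}, that trace the
    line segment from (x0, y0) to (x1, y1) on an integer mesh.

    Only addition, subtraction and comparison are used inside the loop,
    mirroring the 'no multiplication or division' property described in the
    passage. Diagonal yields correspond to the plotter's four diagonal
    movements; the axis-aligned yields to the other four.
    """
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # running integer error term
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = err + err                 # doubling via addition only
        step_x = step_y = 0
        if e2 >= dy:                   # error says: advance along x
            err += dy
            x += sx
            step_x = sx
        if e2 <= dx:                   # error says: advance along y
            err += dx
            y += sy
            step_y = sy
        yield step_x, step_y

# Example: step sequence for a shallow segment on the mesh.
print(list(plotter_steps(0, 0, 5, 2)))  # [(1, 0), (1, 1), (1, 0), (1, 1), (1, 0)]
```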
scidocsrr
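The exam-score passage in the preceding list (docid 2528b2…) fits a linear regression of final exam scores on early course assessments to see which assessment predicts the final mark best. The snippet below is a generic ordinary-least-squares version of that kind of fit; the feature names and the toy numbers are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical early-assessment features (rows = students):
# columns = [quiz average, homework average, midterm score]
X = np.array([
    [78, 85, 70],
    [92, 88, 81],
    [65, 70, 58],
    [88, 95, 77],
    [54, 60, 49],
    [81, 79, 73],
], dtype=float)
y = np.array([74, 86, 60, 83, 52, 76], dtype=float)   # final exam scores

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, weights = coef[0], coef[1:]

print("intercept:", round(intercept, 2))
for name, w in zip(["quiz", "homework", "midterm"], weights):
    # Larger |weight| suggests a stronger predictor, provided the
    # features are on comparable scales (standardise them otherwise).
    print(f"{name:>8}: {w:+.3f}")
```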
901a4c113ec10d01b934f80bb6ac0dc8
Software clones in scratch projects: on the presence of copy-and-paste in computational thinking learning
[ { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" } ]
[ { "docid": "686e8892c22a740fbd781f0cc0150a9d", "text": "Difficulty with handwriting is one of the most frequent reasons that children in the public schools are referred to occupational therapy. Current research on the influence of ergonomic factors, such as pencil grip and pressure, and perceptual-motor factors traditionally believed to affect handwriting, is reviewed. Factors such as visual perception show little relationship to handwriting, whereas tactile-kinesthetic, visual-motor, and motor planning appear to be more closely related to handwriting. By better understanding the ergonomic and perceptual-motor factors that contribute to and influence handwriting, therapists will be better able to design rationally based intervention programs.", "title": "" }, { "docid": "189ecff4c6f01ba870908fa4abc8db91", "text": "Graph processing is becoming increasingly prevalent across many application domains. In spite of this prevalence, there is little research about how graphs are actually used in practice. We conducted an online survey aimed at understanding: (i) the types of graphs users have; (ii) the graph computations users run; (iii) the types of graph software users use; and (iv) the major challenges users face when processing their graphs. We describe the responses of the participants to our questions, highlighting common patterns and challenges. The participants’ responses revealed surprising facts about graph processing in practice, which we hope can guide future research.", "title": "" }, { "docid": "b1e4fb97e4b1d31e4064f174e50f17d3", "text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.", "title": "" }, { "docid": "ce0cfd1dd69e235f942b2e7583b8323b", "text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. 
These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web.  2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9ace030a915a6ec8bf8f35b918c8c8aa", "text": "Why are boys at risk? To address this question, I use the perspective of regulation theory to offer a model of the deeper psychoneurobiological mechanisms that underlie the vulnerability of the developing male. The central thesis of this work dictates that significant gender differences are seen between male and female social and emotional functions in the earliest stages of development, and that these result from not only differences in sex hormones and social experiences but also in rates of male and female brain maturation, specifically in the early developing right brain. I present interdisciplinary research which indicates that the stress-regulating circuits of the male brain mature more slowly than those of the female in the prenatal, perinatal, and postnatal critical periods, and that this differential structural maturation is reflected in normal gender differences in right-brain attachment functions. Due to this maturational delay, developing males also are more vulnerable over a longer period of time to stressors in the social environment (attachment trauma) and toxins in the physical environment (endocrine disruptors) that negatively impact right-brain development. In terms of differences in gender-related psychopathology, I describe the early developmental neuroendocrinological and neurobiological mechanisms that are involved in the increased vulnerability of males to autism, early onset schizophrenia, attention deficit hyperactivity disorder, and conduct disorders as well as the epigenetic mechanisms that can account for the recent widespread increase of these disorders in U.S. culture. I also offer a clinical formulation of early assessments of boys at risk, discuss the impact of early childcare on male psychopathogenesis, and end with a neurobiological model of optimal adult male socioemotional functions.", "title": "" }, { "docid": "5bee5208fa2676b7a7abf4ef01f392b8", "text": "Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's \"omics\". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. 
The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.", "title": "" }, { "docid": "11ecb3df219152d33020ba1c4f8848bb", "text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.", "title": "" }, { "docid": "704df193801e9cd282c0ce2f8a72916b", "text": "We present our preliminary work in developing augmented reali ty systems to improve methods for the construction, inspection, and renovatio n of architectural structures. Augmented reality systems add virtual computer-generated mate rial to the surrounding physical world. Our augmented reality systems use see-through headworn displays to overlay graphics and sounds on a person’s naturally occurring vision and hearing. As the person moves about, the position and orientation of his or her head is tracked, allowing the overlaid material to remai n tied to the physical world. We describe an experimental augmented reality system tha t shows the location of columns behind a finished wall, the location of re-bar s inside one of the columns, and a structural analysis of the column. We also discuss our pre liminary work in developing an augmented reality system for improving the constructio n of spaceframes. Potential uses of more advanced augmented reality systems are presented.", "title": "" }, { "docid": "a37aae87354ff25bf7937adc7a9f8e62", "text": "Vectorizing hand-drawn sketches is an important but challenging task. Many businesses rely on fashion, mechanical or structural designs which, sooner or later, need to be converted in vectorial form. For most, this is still a task done manually. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types in a precise, reliable and highly-simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson’s cross correlation and a new unbiased thinning algorithm that can get rid of scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging and edge linking procedures to post-process the obtained paths. 
Finally, a modification of the original Schneider’s vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the steps presented in this framework have been extensively tested and compared with state-of-the-art algorithms, showing (both qualitatively and quantitatively) their outperformance. Moreover they exhibit fast real-time performance, making them suitable for integration in any computer graphics toolset.", "title": "" }, { "docid": "13e61389de352298bf9581bc8a8714cc", "text": "A bacterial gene (neo) conferring resistance to neomycin-kanamycin antibiotics has been inserted into SV40 hybrid plasmid vectors and introduced into cultured mammalian cells by DNA transfusion. Whereas normal cells are killed by the antibiotic G418, those that acquire and express neo continue to grow in the presence of G418. In the course of the selection, neo DNA becomes associated with high molecular weight cellular DNA and is retained even when cells are grown in the absence of G418 for extended periods. Since neo provides a marker for dominant selections, cell transformation to G418 resistance is an efficient means for cotransformation of nonselected genes.", "title": "" }, { "docid": "3fa30df910c964bb2bf27a885aa59495", "text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.", "title": "" }, { "docid": "6133ec98d838c576f1441e9d7fa58528", "text": "Since repositories are a key tool in making scholarly knowledge open access (OA), determining their web presence and visibility on the Web (both are proxies of web impact) is essential, particularly in Google (search engine par excellence) and Google Scholar (a tool increasingly used by researchers to search for academic information). The few studies conducted so far have been limited to very specific geographic areas (USA), which makes it necessary to find out what is happening in other regions that are not part of mainstream academia, and where repositories play a decisive role in the visibility of scholarly production. The main objective of this study is to ascertain the web presence and visibility of Latin American repositories in Google and Google Scholar through the application of page count and web mention indicators respectively. For a sample of 137 repositories, the results indicate that the indexing ratio is low in Google, and virtually nonexistent in Google Scholar; they also indicate a complete lack of correspondence between the repository records and the data produced by these two search tools. 
These results are mainly attributable to limitations arising from the use of description schemas that are incompatible with Google Scholar (repository design) and the reliability of web mention indicators (search engines). We conclude that neither Google nor Google Scholar accurately represent the actual size of OA content published by Latin American repositories; this may indicate a non-indexed, hidden side to OA, which could be limiting the dissemination and consumption of OA scholarly literature.", "title": "" }, { "docid": "c95b4720e567003c078b7858c3b43590", "text": "The fate of differentiation of G1E cells is determined, among other things, by a handful of transcription factors (TFs) binding the neighborhood of appropriate gene targets. The problem of understanding the dynamics of gene expression regulation is a feature learning problem on high dimensional space determined by the sizes of gene neighborhoods, but that can be projected on a much lower dimensional manifold whose space depends on the number of TFs and the number of ways they interact. To learn this manifold, we train a deep convolutional network on the activity of TF binding on 20Kb gene neighborhoods labeled by binarized levels of target gene expression. After supervised training of the model we achieve 77% accuracy as estimated by 10-fold CV. We discuss methods for the representation of the model knowledge back into the input space. We use this representation to highlight important patterns and genome locations with biological importance.", "title": "" }, { "docid": "a9f70ea201e17bca3b97f6ef7b2c1c15", "text": "Network embedding task aims at learning low-dimension latent representations of vertices while preserving the structure of a network simultaneously. Most existing network embedding methods mainly focus on static networks, which extract and condense the network information without temporal information. However, in the real world, networks keep evolving, where the linkage states between the same vertex pairs at consequential timestamps have very close correlations. In this paper, we propose to study the network embedding problem and focus on modeling the linkage evolution in the dynamic network setting. To address this problem, we propose a deep dynamic network embedding method. More specifically, the method utilizes the historical information obtained from the network snapshots at past timestamps to learn latent representations of the future network. In the proposed embedding method, the objective function is carefully designed to incorporate both the network internal and network dynamic transition structures. Extensive empirical experiments prove the effectiveness of the proposed model on various categories of real-world networks, including a human contact network, a bibliographic network, and e-mail networks. Furthermore, the experimental results also demonstrate the significant advantages of the method compared with both the state-of-the-art embedding techniques and several existing baseline methods.", "title": "" }, { "docid": "836bdb7960c7679c4d7b4285f04b65b4", "text": "PURPOSE\nBendamustine hydrochloride is an alkylating agent with novel mechanisms of action. This phase II multicenter study evaluated the efficacy and toxicity of bendamustine in patients with B-cell non-Hodgkin's lymphoma (NHL) refractory to rituximab.\n\n\nPATIENTS AND METHODS\nPatients received bendamustine 120 mg/m(2) intravenously on days 1 and 2 of each 21-day cycle. 
Outcomes included response, duration of response, progression-free survival, and safety.\n\n\nRESULTS\nSeventy-six patients, ages 38 to 84 years, with predominantly stage III/IV indolent (80%) or transformed (20%) disease were treated; 74 were assessable for response. Twenty-four (32%) were refractory to chemotherapy. Patients received a median of two prior unique regimens. An overall response rate of 77% (15% complete response, 19% unconfirmed complete response, and 43% partial) was observed. The median duration of response was 6.7 months (95% CI, 5.1 to 9.9 months), 9.0 months (95% CI, 5.8 to 16.7) for patients with indolent disease, and 2.3 months (95% CI, 1.7 to 5.1) for those with transformed disease. Thirty-six percent of these responses exceeded 1 year. The most frequent nonhematologic adverse events included nausea and vomiting, fatigue, constipation, anorexia, fever, cough, and diarrhea. Grade 3 or 4 reversible hematologic toxicities included neutropenia (54%), thrombocytopenia (25%), and anemia (12%).\n\n\nCONCLUSION\nSingle-agent bendamustine produced durable objective responses with acceptable toxicity in heavily pretreated patients with rituximab-refractory, indolent NHL. These findings are promising and will serve as a benchmark for future clinical trials in this novel patient population.", "title": "" }, { "docid": "1e1b5ae673204208a1afbca9267bfa69", "text": "Article History Received: 19 March 2018 Revised: 30 April 2018 Accepted: 2 May 2018 Published: 5 May 2018", "title": "" }, { "docid": "d7bf9a0b87a1062fd07794660d86f9dc", "text": "Portraiture plays a substantial role in traditional painting, yet it has not been studied in depth in painterly rendering research. The difficulty in rendering human portraits is due to our acute visual perception to the structure of human face. To achieve satisfactory results, a portrait rendering algorithm should account for facial structure. In this paper, we present an example-based method to render portrait paintings from photographs, by transferring brush strokes from previously painted portrait templates by artists. These strokes carry rich information about not only the facial structure but also how artists depict the structure with large and decisive brush strokes and vibrant colors. With a dictionary of portrait painting templates for different types of faces, we show that this method can produce satisfactory results.", "title": "" }, { "docid": "e3da610a131922990edaa6216ff4a025", "text": "Learning high-level image representations using object proposals has achieved remarkable success in multi-label image recognition. However, most object proposals provide merely coarse information about the objects, and only carefully selected proposals can be helpful for boosting the performance of multi-label image recognition. In this paper, we propose an object-proposal-free framework for multi-label image recognition: random crop pooling (RCP). Basically, RCP performs stochastic scaling and cropping over images before feeding them to a standard convolutional neural network, which works quite well with a max-pooling operation for recognizing the complex contents of multi-label images. To better fit the multi-label image recognition task, we further develop a new loss function-the dynamic weighted Euclidean loss-for the training of the deep network. Our RCP approach is amazingly simple yet effective. It can achieve significantly better image recognition performance than the approaches using object proposals. 
Moreover, our adapted network can be easily trained in an end-to-end manner. Extensive experiments are conducted on two representative multi-label image recognition data sets (i.e., PASCAL VOC 2007 and PASCAL VOC 2012), and the results clearly demonstrate the superiority of our approach.", "title": "" }, { "docid": "5c444fcd85dd89280eee016fd1cbd175", "text": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google’s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.", "title": "" }, { "docid": "2eebc7477084b471f9e9872ba8751359", "text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.", "title": "" } ]
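The random crop pooling (RCP) passage near the end of this list (docid e3da610a…) aggregates per-crop predictions with max pooling after stochastic scaling and cropping. The fragment below is a framework-free caricature of just the crop-and-pool step: it samples random crops, scores each with a placeholder classifier, and max-pools the per-label scores. The crop size, crop count and the score_crop stub are stand-ins, and the scaling step and the dynamic weighted Euclidean loss are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crops(image, crop_size, n_crops):
    """Sample n_crops random crop_size x crop_size windows from an H x W x C array."""
    h, w = image.shape[:2]
    for _ in range(n_crops):
        top = rng.integers(0, h - crop_size + 1)
        left = rng.integers(0, w - crop_size + 1)
        yield image[top:top + crop_size, left:left + crop_size]

def score_crop(crop, n_labels=20):
    """Placeholder for a CNN forward pass: returns one score per label."""
    return rng.random(n_labels)          # stand-in for real per-label logits

def rcp_predict(image, crop_size=224, n_crops=10, n_labels=20):
    """Max-pool per-label scores over random crops (the 'pooling' in RCP)."""
    scores = np.stack([score_crop(c, n_labels)
                       for c in random_crops(image, crop_size, n_crops)])
    return scores.max(axis=0)            # one aggregated score per label

image = rng.random((480, 640, 3))        # dummy multi-label image
print(rcp_predict(image).shape)          # (20,)
```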
scidocsrr
e2212dc4b06cd38f849ac5ae8f9baf57
IBM PAIRS curated big data service for accelerated geospatial data analytics and discovery
[ { "docid": "e1e51c40df9888f591d4625d666063bc", "text": "DTU Orbit (17/12/2018) Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current developments as well as recommending what needs to be developed further in the near future.", "title": "" }, { "docid": "99a874fd9545649f517eb2a949a9b934", "text": "Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAV) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud (<1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from ∼50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored. Remote Sens. 2012, 4 1574", "title": "" } ]
[ { "docid": "776de4218230e161570d599440183354", "text": "For the first time, we present a state-of-the-art energy-efficient 16nm technology integrated with FinFET transistors, 0.07um2 high density (HD) SRAM, Cu/low-k interconnect and high density MiM for mobile SoC and computing applications. This technology provides 2X logic density and >35% speed gain or >55% power reduction over our 28nm HK/MG planar technology. To our knowledge, this is the smallest fully functional 128Mb HD FinFET SRAM (with single fin) test-chip demonstrated with low Vccmin for 16nm node. Low leakage (SVt) FinFET transistors achieve excellent short channel control with DIBL of <;30 mV/V and superior Idsat of 520/525 uA/um at 0.75V and Ioff of 30 pA/um for NMOS and PMOS, respectively.", "title": "" }, { "docid": "3e0fe6ac8819153639b950a4993b111b", "text": "This paper reports on status of components development for miniaturized nuclear magnetic resonance gyroscopes (micro-NMRG). The reported components are (1) coils to generate and control the magnetic field, demonstrating experimental magnetic field to current ratio of B/I=214.1 uT/A and resulting in an estimated magnetic field homogeneity of H=354ppm, (2) a micro-fabricated spherical cells demonstrating confined alkali metal and noble gas, (3) a heater to keep the alkali metal in the vapor state, showing the capability of heating the micro-cell up to 160 degrees C with 1.44W of power, (4) backbone structure with integrated reflectors, demonstrating the ability to preserve 90.9% of initial light polarization. The introduced design utilized glassblowing process on a wafer-level for fabricating miniaturized NMR cell and 3D-folded-MEMS approach for fabricating the coils, heaters, and reflectors. The field homogeneity of the introduced coil design is capable of achieving the transverse-relaxation time of T2=7.5s. The projected ARW of the current design of micro-NMRG is 0.1 deg/rt-hr", "title": "" }, { "docid": "19443768282cf17805e70ac83288d303", "text": "Interactive narrative is a form of storytelling in which users affect a dramatic storyline through actions by assuming the role of characters in a virtual world. This extended abstract outlines the SCHEHERAZADE-IF system, which uses crowdsourcing and artificial intelligence to automatically construct text-based interactive narrative experiences.", "title": "" }, { "docid": "d614eb429aa62e7d568acbba8ac7fe68", "text": "Four women, who previously had undergone multiple unsuccessful in vitro fertilisation (IVF) cycles because of failure of implantation of good quality embryos, were identified as having coexisting uterine adenomyosis. Endometrial biopsies showed that adenomyosis was associated with a prominent aggregation of macrophages within the superficial endometrial glands, potentially interfering with embryo implantation. The inactivation of adenomyosis by an ultra-long pituitary downregulation regime promptly resulted in successful pregnancy for all women in this case series.", "title": "" }, { "docid": "9504c6c6286f6bd57e5e443d6fdcced9", "text": "Comparisons of two assessment measures for ADHD: the ADHD Behavior Checklist and the Integrated Visual and Auditory Continuous Performance Test (IVA CPT) were examined using undergraduates (n=44) randomly assigned to a control or a simulated malingerer condition and undergraduates with a valid diagnosis of ADHD (n=16). 
It was predicted that malingerers would successfully fake ADHD on the rating scale but not on the CPT for which they would overcompensate, scoring lower than all other groups. Analyses indicated that the ADHD Behavior Rating Scale was successfully faked for childhood and current symptoms. IVA CPT could not be faked on 81% of its scales. The CPT's impairment index results revealed: sensitivity 94%, specificity 91%, PPP 88%, NPP 95%. Results provide support for the inclusion of a CPT in assessment of adult ADHD.", "title": "" }, { "docid": "48623054af5217d48b05aed57a67ae66", "text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. (3) Automated tools have been built to support our approach, delivering the environmental scores for software products.", "title": "" }, { "docid": "740a83306dddd3123a910acbbd01ff80", "text": "We present a framework to understand GAN training as alternating density ratio estimation, and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that target arbitrary f -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.", "title": "" }, { "docid": "23e39951466e614fd3a1ac9c9fcdf5ef", "text": "Dengue is a life threatening disease prevalent in several developed as well as developing countries like India. This is a virus born disease caused by breeding of Aedes mosquito. Datasets that are available for dengue describe information about the patients suffering with dengue disease and without dengue disease along with their symptoms like: Fever Temperature, WBC, Platelets, Severe Headache, Vomiting, Metallic Taste, Joint Pain, Appetite, Diarrhea, Hematocrit, Hemoglobin, and how many days suffer in different city. In this paper we discuss various algorithm approaches of data mining that have been utilized for dengue disease prediction. Data mining is a well known technique used by health organizations for classification of diseases such as dengue, diabetes and cancer in bioinformatics research. In the proposed approach we have used WEKA with 10 cross validation to evaluate data and compare results. Weka has an extensive collection of different machine learning and data mining algorithms. In this paper we have firstly classified the dengue data set and then compared the different data mining techniques in weka through Explorer, knowledge flow and Experimenter interfaces. 
Furthermore in order to validate our approach we have used a dengue dataset with 108 instances but weka used 99 rows and 18 attributes to determine the prediction of disease and their accuracy using classifications of different algorithms to find out the best performance. The main objective of this paper is to classify data and assist the users in extracting useful information from data and easily identify a suitable algorithm for accurate predictive model from it. From the findings of this paper it can be concluded that Naïve Bayes and J48 are the best performance algorithms for classified accuracy because they achieved maximum accuracy= 100% with 99 correctly classified instances, maximum ROC = 1 , had least mean absolute error and it took minimum time for building this model through Explorer and Knowledge flow results.", "title": "" }, { "docid": "813a3988b84745ec768959d1c98ac0a8", "text": "To enhance effectiveness, a user's query can be rewritten internally by the search engine in many ways, for example by applying proximity, or by expanding the query with related terms. However, approaches that benefit effectiveness often have a negative impact on efficiency, which has impacts upon the user satisfaction, if the query is excessively slow. In this paper, we propose a novel framework for using the predicted execution time of various query rewritings to select between alternatives on a per-query basis, in a manner that ensures both effectiveness and efficiency. In particular, we propose the prediction of the execution time of ephemeral (e.g., proximity) posting lists generated from uni-gram inverted index posting lists, which are used in establishing the permissible query rewriting alternatives that may execute in the allowed time. Experiments examining both the effectiveness and efficiency of the proposed approach demonstrate that a 49% decrease in mean response time (and 62% decrease in 95th-percentile response time) can be attained without significantly hindering the effectiveness of the search engine.", "title": "" }, { "docid": "ca509048385b8cf28bd7b89c685f21b2", "text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.", "title": "" }, { "docid": "c20e31ddee311a1703fb5ff3687a1215", "text": "The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. Most works in deep learning have achieved a great success on regular input representations, but they are hard to be directly applied to classify point clouds due to the irregularity and inhomogeneity of the data. In this paper, a deep neural network with spatial pooling (DNNSP) is proposed to classify large-scale point clouds without rasterization. 
The DNNSP first obtains the point-based feature descriptors of all points in each point cluster. The distance minimum spanning tree-based pooling is then applied in the point feature representation to describe the spatial information among the points in the point clusters. The max pooling is next employed to aggregate the point-based features into the cluster-based features. To assure the DNNSP is invariant to the point permutation and sizes of the point clusters, the point-based feature representation is determined by the multilayer perception (MLP) and the weight sharing for each point is retained, which means that the weight of each point in the same layer is the same. In this way, the DNNSP can learn the features of points scaled from the entire regions to the centers of the point clusters, which makes the point cluster-based feature representations robust and discriminative. Finally, the cluster-based features are input to another MLP for point cloud classification. We have evaluated qualitatively and quantitatively the proposed method using several airborne laser scanning and terrestrial laser scanning point cloud data sets. The experimental results have demonstrated the effectiveness of our method in improving classification accuracy.", "title": "" }, { "docid": "c1090b530ab719bdd012ebb3b80cf361", "text": "Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel, EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available for the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.", "title": "" }, { "docid": "ef800faf0f7928295e57ca02df48e1e7", "text": "The electron field-emission properties of carbon nanotubes enable the fabrication of cold cathodes for a variety of vacuum device applications. The utilization of these cathodes is an attractive alternative for the replacement of thermionic or hot cathodes for generating x-rays.[1] Miniature X-ray tubes [2,3,4] have been fabricated using a triode-style carbon nanotube-based cathodes.[5] In this paper, we report the results of characterization studies, such as beam current dependence on the control gate voltage. Also, results on focal spot measurements and electron-beam modeling will be presented. 
It is shown that the new x-ray tubes exhibit superior lifetime, stable focus spot, and are capable of pulsed operation. The miniature tube has also been integrated with a high-voltage power supply for operation in a constant anode-current mode, providing a stable platform for x-ray analysis applications. [1] “Generation of Continuous and Pulse Diagnostic Imaging X-Ray Radiation Using a Carbon-Nanotube-Based Field Emission Cathode”, G. Z. Yue, Q. Qiu, Bo Gao, Y. Cheng, J. Zhang, H. Shimoda, S. Chang, J. P. Lu, and O. Zhou, Appl. Phys. Lett. 81, 355357 (2002). [2] “Characterization Techniques for Miniature Low-Power X-Ray Tubes”, A. ReyesMena, Melany Moras, Charles Jensen, Steven D. Liddiard, and D. Clark Turner, Advance In X-ray Analysis 47, 2003. [3] “Improvements in Low-Power, End-Window, Transmission-Target X-Ray Tubes”, Charles Jensen, Stephen M. Elliott, Steven D. Liddiard, A. Reyes-Mena, Melany Moras, and D. Clark Turner, Advance In X-ray Analysis 47, 2003. [4] “Mobile Miniature X-Ray Source”, D. Clark Turner, Arturo Reyes, Hans K. Pew, Mark W. Lund, Michael Lines, Paul Moody, and Sergei Voronov, US Patent 6661876. [5] “X-Ray Generating Mechanism Using Electron Field Emission Cathode”, Otto Zhou and J. Lu, US Patent 6,553,096.", "title": "" }, { "docid": "a8d7f6dcaf55ebd5ec580b2b4d104dd9", "text": "In this paper we investigate social tags as a novel highvolume source of semantic metadata for music, using techniques from the fields of information retrieval and multivariate data analysis. We show that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre. We introduce the use of Correspondence Analysis to visualise this semantic space, and show how it can be applied to create a browse-by-mood interface for a psychologically-motivated two-dimensional subspace rep resenting musical emotion.", "title": "" }, { "docid": "8db6d5115156ebd347577dd81cf916f1", "text": "Measurement of chlorophyll concentration is gaining more-and-more importance in evaluating the status of the marine ecosystem. For wide areas monitoring a reliable architecture of wireless sensors network is required. In this paper, we present a network of smart sensors, based on ISO/IEC/IEEE 21451 suite of standards, for in situ and in continuous space-time monitoring of surface water bodies, in particular for seawater. The system is meant to be an important tool for evaluating water quality and a valid support to strategic decisions concerning critical environment issues. The aim of the proposed system is to capture possible extreme events and collect long-term periods of data.", "title": "" }, { "docid": "e8df1006565902d1b2f5189a02944bca", "text": "A research and development collaboration has been started with the goal of producing a prototype hadron calorimeter section for the purpose of proving the Particle Flow Algorithm concept for the International Linear Collider. Given the unique requirements of a Particle Flow Algorithm calorimeter, custom readout electronics must be developed to service these detectors. This paper introduces the DCal or Digital Calorimetry Chip, a custom integrated circuit developed in a 0.25um CMOS process specifically for this International Linear Collider project. The DCal is capable of handling 64 channels, producing a 1-bit Digital-to-Analog conversion of the input (i.e. hit/no hit). 
It maintains a 24-bit timestamp and is capable of operating either in an externally triggered mode or in a self-triggered mode. Moreover, it is capable of operating either with or without a pipeline delay. Finally, in order to permit the testing of different calorimeter technologies, its analog front end is capable of servicing Particle Flow Algorithm calorimeters made from either Resistive Plate Chambers or Gaseous Electron Multipliers.", "title": "" }, { "docid": "2b97e03fa089cdee0bf504dd85e5e4bb", "text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.", "title": "" }, { "docid": "7a9a7b888b9e3c2b82e6c089d05e2803", "text": "Background:\nBullous pemphigoid (BP) is a chronic, autoimmune blistering skin disease that affects patients' daily life and psychosocial well-being.\n\n\nObjective:\nThe aim of the study was to evaluate the quality of life, anxiety, depression and loneliness in BP patients.\n\n\nMethods:\nFifty-seven BP patients and fifty-seven healthy controls were recruited for the study. The quality of life of each patient was assessed using the Dermatology Life Quality Index (DLQI) scale. Moreover, they were evaluated for anxiety and depression according to the Hospital Anxiety Depression Scale (HADS-scale), while loneliness was measured through the Loneliness Scale-Version 3 (UCLA) scale.\n\n\nResults:\nThe mean DLQI score was 9.45±3.34. Statistically significant differences on the HADS total scale and in HADS-depression subscale (p=0.015 and p=0.002, respectively) were documented. No statistically significant difference was found between the two groups on the HADS-anxiety subscale. Furthermore, significantly higher scores were recorded on the UCLA Scale compared with healthy volunteers (p=0.003).\n\n\nConclusion:\nBP had a significant impact on quality of life and the psychological status of patients, probably due to the appearance of unattractive lesions on the skin, functional problems and disease chronicity.", "title": "" }, { "docid": "006e11d03b1cdf8dcf85ba3967373d8d", "text": "Collaboration in three-dimensional space: “spatial workspace collaboration” is introduced and an approach supporting its use via a video mediated communication system is described. Verbal expression analysis is primarily focused on. Based on experiment results, movability of a focal point, sharing focal points, movability of a shared workspace, and the ability to confirm viewing intentions and movements were determined to be system requirements necessary to support spatial workspace collaboration. 
A newly developed SharedView system having the capability to support spatial workspace collaboration is also introduced, tested, and some experimental results described.", "title": "" }, { "docid": "b3fdd9e446c427022eee637f62ffefa4", "text": "Software maintenance constitutes a major phase of the software life cycle. Studies indicate that software maintenance is responsible for a significant percentage of a system's overall cost and effort. The software engineering community has identified four major types of software maintenance, namely, corrective, perfective, adaptive, and preventive maintenance. Software maintenance can be seen from two major points of view. First, the classic view where software maintenance provides the necessary theories, techniques, methodologies, and tools for keeping software systems operational once they have been deployed to their operational environment. Most legacy systems subscribe to this view of software maintenance. The second view is a more modern emerging view, where maintenance is an integral part of the software development process and it should be applied from the early stages in the software life cycle. Regardless of the view by which we consider software maintenance, the fact is that it is the driving force behind software evolution, a very important aspect of a software system. This entry provides an in-depth discussion of software maintenance techniques, methodologies, tools, and emerging trends.", "title": "" } ]
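
One of the passages earlier in this list describes detecting SIM box fraud by deriving nine features from call detail records (CDRs) and feeding them to a multilayer perceptron. The short scikit-learn sketch below mirrors that supervised set-up only in outline: the feature names, the synthetic data, the labelling rule, and the network size are hypothetical stand-ins, not the cited work's actual features, data, or results.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Nine hypothetical CDR-derived features per subscriber (placeholders only).
FEATURES = ["calls_out", "calls_in", "distinct_callees", "night_call_ratio",
            "avg_call_duration", "cells_used", "sms_sent", "incoming_ratio",
            "international_ratio"]

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, len(FEATURES)))
# Synthetic labelling rule: "fraud-like" rows make many outgoing calls but
# receive few incoming ones. Real labels would come from investigated cases.
y = ((X[:, 0] > 0.5) & (X[:, 7] < -0.2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")

The scaler in the pipeline matters because MLP training is sensitive to feature scale; the 98.71% accuracy quoted in the passage refers to the authors' real CDR data, not to anything this toy script produces.
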
scidocsrr
7dcb8c4d5cc091c23056207ed06cbc7c
Temporal and Spatial Clustering for a Parking Prediction Service
[ { "docid": "b66e878b1d907c684637bf308ee9fd3f", "text": "The search for free parking places is a promising application for vehicular ad hoc networks (VANETs). In order to guide drivers to a free parking place at their destination, it is necessary to estimate the occupancy state of the parking lots within the destination area at time of arrival. In this paper, we present a model to predict parking lot occupancy based on information exchanged among vehicles. In particular, our model takes the age of received parking lot information and the time needed to arrive at a certain parking lot into account and estimates the future parking situation at time of arrival. It is based on queueing theory and uses a continuous-time homogeneous Markov model. We have evaluated the model in a simulation study based on a detailed model of the city of Brunswick, Germany.", "title": "" } ]
[ { "docid": "aac17c2c975afaa3f55e42e698d398b3", "text": "Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.", "title": "" }, { "docid": "164e5bde10882e3f7a6bcdf473eb7387", "text": "This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.", "title": "" }, { "docid": "3fa8b8a93716a85f8573bd1cb8d215f2", "text": "Vision-based research for intelligent vehicles have traditionally focused on specific regions around a vehicle, such as a front looking camera for, e.g., lane estimation. Traffic scenes are complex and vital information could be lost in unobserved regions. This paper proposes a framework that uses four visual sensors for a full surround view of a vehicle in order to achieve an understanding of surrounding vehicle behaviors. 
The framework will assist the analysis of naturalistic driving studies by automating the task of data reduction of the observed trajectories. To this end, trajectories are estimated using a vehicle detector together with a multiperspective optimized tracker in each view. The trajectories are transformed to a common ground plane, where they are associated between perspectives and analyzed to reveal tendencies around the ego-vehicle. The system is tested on sequences from 2.5 h of drive on US highways. The multiperspective tracker is tested in each view as well as for the ability to associate vehicles bet-ween views with a 92% recall score. A case study of vehicles approaching from the rear shows certain patterns in behavior that could potentially influence the ego-vehicle.", "title": "" }, { "docid": "df9b16a3a07c550464b899de60cdc212", "text": "This paper addresses human activity recognition based on a new feature descriptor. For a binary human silhouette, an extended radon transform, R transform, is employed to represent low-level features. The advantage of the R transform lies in its low computational complexity and geometric invariance. Then a set of HMMs based on the extracted features are trained to recognize activities. Compared with other commonly-used feature descriptors, R transform is robust to frame loss in video, disjoint silhouettes and holes in the shape, and thus achieves better performance in recognizing similar activities. Rich experiments have proved the efficiency of the proposed method.", "title": "" }, { "docid": "80bfff01fbb1f6453b37d39b3b8b63f8", "text": "We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the block gradient can be exactly obtained. However, such a \"batch\" setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve the regularized sparse learning problems. Our numerical experiments shows that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods.", "title": "" }, { "docid": "ac6e7d8ee24d6e38765d43c85106b237", "text": "The drivers behind microplastic (up to 5mm in diameter) consumption by animals are uncertain and impacts on foundational species are poorly understood. We investigated consumption of weathered, unfouled, biofouled, pre-production and microbe-free National Institute of Standards plastic by a scleractinian coral that relies on chemosensory cues for feeding. 
Experiment one found that corals ingested many plastic types while mostly ignoring organic-free sand, suggesting that plastic contains phagostimulents. Experiment two found that corals ingested more plastic that wasn't covered in a microbial biofilm than plastics that were biofilmed. Additionally, corals retained ~8% of ingested plastic for 24h or more and retained particles appeared stuck in corals, with consequences for energetics, pollutant toxicity and trophic transfer. The potential for chemoreception to drive plastic consumption in marine taxa has implications for conservation.", "title": "" }, { "docid": "3cd436930a9679f702628956b8329c92", "text": "The rapid pace of innovations in information and communication technology (ICT) industry over the past decade has greatly improved people’s mobile communication experience. This, in turn, has escalated exponential growth in the number of connected mobile devices and data traffic volume in wireless networks. Researchers and network service providers have faced many challenges in providing seamless, ubiquitous, reliable, and high-speed data service to mobile users. Mathematical optimization, as a powerful tool, plays an important role in addressing such challenging issues. This dissertation addresses several radio resource allocation problems in 4G and 5G mobile communication systems, in order to improve network performance in terms of throughput, energy, or fairness. Mathematical optimization is applied as the main approach to analyze and solve the problems. Theoretical analysis and algorithmic solutions are derived. Numerical results are obtained to validate our theoretical findings and demonstrate the algorithms’ ability of attaining optimal or near-optimal solutions. Five research papers are included in the dissertation. In Paper I, we study a set of optimization problems of consecutive-channel allocation in single carrier-frequency division multiple access (SC-FDMA) systems. We provide a unified algorithmic framework to optimize the channel allocation and improve system performance. The next three papers are devoted to studying energy-saving problems in orthogonal frequency division multiple access (OFDMA) systems. In Paper II, we investigate a problem of jointly minimizing energy consumption at both transmitter and receiver sides. An energy-efficient scheduling algorithm is developed to provide optimality bounds and near-optimal solutions. Next in Paper III, we derive fundamental properties for energy minimization in load-coupled OFDMA networks. Our analytical results", "title": "" }, { "docid": "c5fa73d74225b29230e33ec2e8bb3a63", "text": "This paper presents Discriminative Locality Alignment Network (DLANet), a novel manifold-learningbased discriminative learnable feature, for wild scene classification. Based on a convolutional structure, DLANet learns the filters of multiple layers by applying DLA and exploits the block-wise histograms of the binary codes of feature maps to generate the local descriptors. A DLA layer maximizes the margin between the inter-class patches and minimizes the distance of the intra-class patches in the local region. In particular, we construct a two-layer DLANet by stacking two DLA layers and a feature layer. It is followed by a popular framework of scene classification, which combines Locality-constrained Linear Coding–Spatial Pyramid Matching (LLC–SPM) and linear Support Vector Machine (SVM). We evaluate DLANet on NYU Depth V1, Scene-15 and MIT Indoor-67. 
Experiments show that DLANet performs well on depth image. It outperforms the carefully tuned features, including SIFT and is also competitive to the other reported methods. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d8ecf2146b216c0de9b911e26264842b", "text": "Conditional Random Fields (CRFs) are an effective tool for a variety of different data segmentation and labeling tasks including visual scene interpretation, which seeks to partition images into their constituent semantic-level regions and assign appropriate class labels to each region. For accurate labeling it is important to capture the global context of the image as well as local information. We introduce a CRF based scene labeling model that incorporates both local features and features aggregated over the whole image or large sections of it. Secondly, traditional CRF learning requires fully labeled datasets which can be costly and troublesome to produce. We introduce a method for learning CRFs from datasets with many unlabeled nodes by marginalizing out the unknown labels so that the log-likelihood of the known ones can be maximized by gradient ascent. Loopy Belief Propagation is used to approximate the marginals needed for the gradient and log-likelihood calculations and the Bethe free-energy approximation to the log-likelihood is monitored to control the step size. Our experimental results show that effective models can be learned from fragmentary labelings and that incorporating top-down aggregate features significantly improves the segmentations. The resulting segmentations are compared to the state-of-the-art on three different image datasets.", "title": "" }, { "docid": "3d3589a002f8195bb20324dd8a8f5d76", "text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "eadc810575416fccea879c571ddfbfd2", "text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. 
A key observation is that it is difficult to classify anchors of different sizes with the same set of features. Anchors of different sizes should be placed accordingly based on different depth within a network: smaller boxes on high-resolution layers with a smaller stride while larger boxes on low-resolution counterparts with a larger stride. Inspired by the conv/deconv structure, we fully leverage the low-level local details and high-level regional semantics from two feature map streams, which are complimentary to each other, to identify the objectness in an image. A map attention decision (MAD) unit is further proposed to aggressively search for neuron activations among two streams and attend the most contributive ones on the feature learning of the final loss. The unit serves as a decision-maker to adaptively activate maps along certain channels with the solely purpose of optimizing the overall training loss. One advantage of MAD is that the learned weights enforced on each feature channel is predicted on-the-fly based on the input context, which is more suitable than the fixed enforcement of a convolutional kernel. Experimental results on three datasets demonstrate the effectiveness of our proposed algorithm over other state-of-the-arts, in terms of average recall for region proposal and average precision for object detection.", "title": "" }, { "docid": "97ae60400b1099fb2627c4776f88bf88", "text": "The development of an effective mechanism to detect suspicious transactions is a critical problem for financial institutions in their endeavor to prevent anti-money laundering activities. This research addresses this problem by proposing an ontology based expert-system for suspicious transaction detection. The ontology consists of domain knowledge and a set of (SWRL) rules that together constitute an expert system. The native reasoning support in ontology is used to deduce new knowledge from the predefined rules about suspicious transactions. The presented expert-system has been tested on a real data set of more than 8 million transactions of a commercial bank. The novelty of the approach lies in the use of ontology driven technique that not only minimizes the data modeling cost but also makes the expert-system extendable and reusable for different applications.", "title": "" }, { "docid": "5432e79349a798083f7b13369307ad80", "text": "Existing recommendation algorithms treat recommendation problem as rating prediction and the recommendation quality is measured by RMSE or other similar metrics. However, we argued that when it comes to E-commerce product recommendation, recommendation is more than rating prediction by realizing the fact price plays a critical role in recommendation result. In this work, we propose to build E-commerce product recommender systems based on fundamental economic notions. We first proposed an incentive compatible method that can effectively elicit consumer's willingness-to-pay in a typical E-commerce setting and in a further step, we formalize the recommendation problem as maximizing total surplus. We validated the proposed WTP elicitation algorithm through crowd sourcing and the results demonstrated that the proposed approach can achieve higher seller profit by personalizing promotion. We also proposed a total surplus maximization (TSM) based recommendation framework. 
We specified TSM by three of the most representative settings - e-commerce where the product quantity can be viewed as infinity, P2P lending where the resource is bounded and freelancer marketing where the resource (job) can be assigned to one freelancer. The experimental results of the corresponding datasets shows that TSM exceeds existing approach in terms of total surplus.", "title": "" }, { "docid": "29fc090c5d1e325fd28e6bbcb690fb8d", "text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools", "title": "" }, { "docid": "a936f3ea3a168c959c775dbb50a5faf2", "text": "From the Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. Address correspondence to Dr. Schmahmann, Department of Neurology, VBK 915, Massachusetts General Hospital, Fruit St., Boston, MA 02114; jschmahmann@partners.org (E-mail). Copyright 2004 American Psychiatric Publishing, Inc. Disorders of the Cerebellum: Ataxia, Dysmetria of Thought, and the Cerebellar Cognitive Affective Syndrome", "title": "" }, { "docid": "0bce954374d27d4679eb7562350674fc", "text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.", "title": "" }, { "docid": "5ffb3e630e5f020365e471e94d678cbb", "text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. 
This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.", "title": "" }, { "docid": "847336f1d05e8242d4d2e60cd0cc98e6", "text": "With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 70s and later decades, through open domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end-users in reusing and querying the SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large scale, heterogeneous, and continuously evolving semantic sources.", "title": "" }, { "docid": "e2bc0f8d8275b93e8559b460590149c8", "text": "A lot of research work has been done on cluster-based mining on relational databases. K-means is a basic algorithm, which is used in many of them. The main drawback of k-means is that it does not give a high precision rate and results are affected by random initialization of cluster centroids. It may produce empty clusters depending on the initial centroids, which reduces the performance of the system. In this paper, we have proposed an Improved K-means algorithm, which improves data clustering by removing empty clusters. Further, it improves the computational time of the algorithm by reusing stored information of previous iterations. The results obtained from our experiments show improvement in accuracy, precision rate and efficiency of the algorithm. The complexity of the algorithm is also reduced from O(nlk) to O(nk).", "title": "" }, { "docid": "1649b2776fcc2b8a736306128f8a2331", "text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.", "title": "" } ]
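
The Improved K-means passage just above reports higher accuracy and lower running time by removing empty clusters. The NumPy sketch below shows one common remedy for the empty-cluster problem, re-seeding an empty cluster with a currently worst-represented point; it is an illustrative stand-in under that assumption, not the cited paper's exact algorithm, and the demo data are synthetic.

import numpy as np

def kmeans_no_empty(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # squared distance of every point to every centroid: shape (n, k)
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # hand each empty cluster one of the worst-represented points
        empty = [j for j in range(k) if not np.any(labels == j)]
        if empty:
            cost = d2[np.arange(len(X)), labels]
            for j, p in zip(empty, np.argsort(-cost)):
                labels[p] = j
        # recompute centroids; keep the old one if a cluster is still empty
        new_centroids = centroids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                new_centroids[j] = members.mean(axis=0)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
                   for c in ([0, 0], [3, 3], [0, 3])])
    centroids, labels = kmeans_no_empty(X, k=3)
    print(centroids.round(2))
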
scidocsrr
d93aabc9c65c058ad45d1663bdc793b5
Event Representations for Automated Story Generation with Deep Neural Nets
[ { "docid": "7a6876aa158c9bc717bd77319f4d2494", "text": "Scripts encode knowledge of prototypical sequences of events. We describe a Recurrent Neural Network model for statistical script learning using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks. We evaluate our system on two tasks, inferring held-out events from text and inferring novel events from text, substantially outperforming prior approaches on both tasks.", "title": "" }, { "docid": "c5f6a559d8361ad509ec10bbb6c3cc9b", "text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.", "title": "" }, { "docid": "fde3c86c90cabfb6e35ec1310b62a8de", "text": "The LSDSem’17 shared task is the Story Cloze Test, a new evaluation for story understanding and script learning. This test provides a system with a four-sentence story and two possible endings, and the system must choose the correct ending to the story. Successful narrative understanding (getting closer to human performance of 100%) requires systems to link various levels of semantics to commonsense knowledge. A total of eight systems participated in the shared task, with a variety of approaches including end-to-end neural networks, feature-based regression models, and rule-based methods. The highest performing system achieves an accuracy of 75.2%, a substantial improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "2f695b3ee94443705ba0f757bf655ae1", "text": "CORFU1 organizes a cluster of flash devices as a single, shared log that can be accessed concurrently by multiple clients over the network. The CORFU shared log makes it easy to build distributed applications that require strong consistency at high speeds, such as databases, transactional key-value stores, replicated state machines, and metadata services. CORFU can be viewed as a distributed SSD, providing advantages over conventional SSDs such as distributed wear-leveling, network locality, fault tolerance, incremental scalability and geodistribution. A single CORFU instance can support up to 200K appends/sec, while reads scale linearly with cluster size. Importantly, CORFU is designed to work directly over network-attached flash devices, slashing cost, power consumption and latency by eliminating storage servers.", "title": "" }, { "docid": "f63374051d4826ad55549d22260d0835", "text": "Interest has been growing in opportunities to build and deploy statistical models that can infer a computer user's current interruptability from computer activity and relevant contextual information. We describe a system that intermittently asks users to assess their perceived interruptability during a training phase and that builds decision-theoretic models with the ability to predict the cost of interrupting the user. The models are used at run-time to compute the expected cost of interruptions, providing a mediator for incoming notifications, based on a consideration of a user's current and recent history of computer activity, meeting status, location, time of day, and whether a conversation is detected.", "title": "" }, { "docid": "60922247ab6ec494528d3a03c0909231", "text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.", "title": "" }, { "docid": "9acb22396046a27e5318ab4ae08f6030", "text": "Interest in graphene centres on its excellent mechanical, electrical, thermal and optical properties, its very high specific surface area, and our ability to influence these properties through chemical functionalization. There are a number of methods for generating graphene and chemically modified graphene from graphite and derivatives of graphite, each with different advantages and disadvantages. Here we review the use of colloidal suspensions to produce new materials composed of graphene and chemically modified graphene. This approach is both versatile and scalable, and is adaptable to a wide variety of applications.", "title": "" }, { "docid": "c07a0053f43d9e1f98bb15d4af92a659", "text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. 
Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" }, { "docid": "590e4b3726aa1f92232451432fb7a36b", "text": "Necrophagous insects are important in the decomposition of cadavers. The close association between insects and corpses and the use of insects in medicocriminal investigations is the subject of forensic entomology. The present paper reviews the historical background of this discipline, important postmortem processes, and discusses the scientific basis underlying attempts to determine the time interval since death. Using medical techniques, such as the measurement of body temperature or analysing livor and rigor mortis, time since death can only be accurately measured for the first two or three days after death. In contrast, by calculating the age of immature insect stages feeding on a corpse and analysing the necrophagous species present, postmortem intervals from the first day to several weeks can be estimated. These entomological methods may be hampered by difficulties associated with species identification, but modern DNA techniques are contributing to the rapid and authoritative identification of necrophagous insects. Other uses of entomological data include the toxicological examination of necrophagous larvae from a corpse to identify and estimate drugs and toxicants ingested by the person when alive and the proof of possible postmortem manipulations. 
Forensic entomology may even help in investigations dealing with people who are alive but in need of care, by revealing information about cases of neglect.", "title": "" }, { "docid": "9d84f58c0a2c8694bf2fe8d2ba0da601", "text": "Most existing Speech Emotion Recognition (SER) systems rely on turn-wise processing, which aims at recognizing emotions from complete utterances and an overly-complicated pipeline marred by many preprocessing steps and hand-engineered features. To overcome both drawbacks, we propose a real-time SER system based on end-to-end deep learning. Namely, a Deep Neural Network (DNN) that recognizes emotions from a one second frame of raw speech spectrograms is presented and investigated. This is achievable due to a deep hierarchical architecture, data augmentation, and sensible regularization. Promising results are reported on two databases which are the eNTERFACE database and the Surrey Audio-Visual Expressed Emotion (SAVEE) database.", "title": "" }, { "docid": "10867422856a6f6cbe88193244706684", "text": "Geographic setting Pleistocene Sahul was a large continent (Fig. 32.1). When sea levels were at their lowest (20–22 kya bp), it covered nearly 11 million square kilometres, roughly the same area as sub-Saharan Africa or Eurasia west of the Ural Mountains. Relief was moderate: more than 90 per cent of its surface was less than 500 metres above maximum low sea level. Significant uplands included only the New Guinea Highlands on the north and the Great Dividing Ranges on the east (maximum elevations ~5200 m and ~2400 m above maximum low sea level, respectively). Major biomes in modern Australia-New Guinea include tropical forest, sub-tropical savanna, and low temperate desert. Small but important patches of temperate forest are found in the southeastern and southwestern corners of mainland Australia and in Tasmania. During the last glacial cycle (75–10 kya bp), cooler, drier climates and lower CO2 levels generally reduced the distribution of tree cover, increased the size of the arid zone, and favoured the development of glacial and peri-glacial habitats in both the northern highlands and far southeast.", "title": "" }, { "docid": "3c110962dea6593b067ffda625cc37b1", "text": "Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a \"one-size-fits-all\" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. 
doi:10.2519/jospt.2018.7476.", "title": "" }, { "docid": "bc1218f0b3dd3772154b9bd43d2dcd65", "text": "Online information has become important data source to analyze the public opinion and behavior, which is significant for social management and business decision. Web crawler systems target at automatically download and parse web pages to extract expected online information. However, as the rapid increasing of web pages and the heterogeneous page structures, the performance and the rules of parsing have become two serious challenges to web crawler systems. In this paper, we propose a distributed and generic web crawler system (DGWC), in which spiders are scheduled to parallel access and parse web pages to improve performance, utilized a shared and memory based database. Furthermore, we package the spider program and the dependencies in a container called Docker to make the system easily horizontal scaling. Last but not the least, a statistics-based approach is proposed to extract the main text using supervised-learning classifier instead of parsing the page structures. Experimental results on real-world data validate the efficiency and effectiveness of DGWC.", "title": "" }, { "docid": "c5e3c54de920537d04dfce79c7aa2782", "text": "BACKGROUND AND PURPOSE\nThe validity of hospital discharge diagnoses is essential in improving stroke surveillance and estimating healthcare costs of stroke. The aim of this study was to assess sensitivity, positive predictive value, and accuracy of discharge diagnoses compared with a stroke register.\n\n\nMETHODS\nA record linkage was made between a population-based stroke register and the discharge records of the hospital serving the population of the stroke register (n=70 000). The stroke register (including patients aged 15 and older and with no upper age limit), applied here as a \"gold standard,\" was used to estimate sensitivity, positive predictive value, and accuracy of the discharge diagnoses classification. The length of stay in hospital by stroke patients was measured.\n\n\nRESULTS\nIdentifying cerebrovascular diseases by hospital discharge diagnoses (International Classification of Diseases, 9th Revision [ICD-9], codes 430 to 438.9, first admission) lead to a substantial overestimation of stroke in the target population. Restricting the retrieval to acute stroke diagnoses (ICD-9 codes 430, 431, 434, and 436) gave an incidence estimate closer to the \"true\" incidence rate in the stroke register. Selecting ICD-9 codes 430 to 438 of cerebrovascular diseases gave the highest sensitivity (86%). The highest positive predictive value (68%) was achieved by selecting acute stroke diagnoses (ICD-9 codes 430, 431, 434, and 436), at the expense of a lower sensitivity (81%). Accuracy of ICD codes 430 to 438.9 (n=678) revealed the highest proportion of incident strokes identified by the acute stroke diagnoses (ICD-9 codes 430, 431, 434, and 436). Seventy-four percent of hospital discharge diagnoses classified as first-ever stroke kept the original diagnosis. Only 4.6% of the discharge diagnoses were classified as nonstroke diagnoses after validation. 
The estimation of length of stay in the hospital was improved by selection of acute stroke diagnoses from hospital discharge data (ICD-9 codes 430, 431, 434, and 436), which gave the same estimate of length of stay, a median of 8 days (2.5 percentile=0 and 97.5 percentile=56), compared with a median of 8 days (2.5 percentile=0 and 97.5 percentile=51) based on the stroke register.\n\n\nCONCLUSIONS\nHospital discharge data may overestimate stroke incidence and underestimate the length of stay in the hospital, unless selection routines of hospital discharge diagnoses are restricted to acute stroke diagnoses (ICD-9 codes 430, 431, 434, and 436). If supplemented by a validation procedure, including estimates of sensitivity, positive predictive value, and accuracy, hospital discharge data may provide valid information on hospital-based stroke incidence and lead to better allocation of health resources. Distinguishing subtypes of stroke from hospital discharge diagnoses should not be performed unless coding practices are improved.", "title": "" }, { "docid": "fb4354f13ce08ec1a30bcae90c812c37", "text": "The problem of image registration subsumes a number of problems and techniques in multiframe image analysis, including the computation of optic flow (general pixel-based motion), stereo correspondence, structure from motion, and feature tracking. We present a new registration algorithm based on spline representations of the displacement field which can be specialized to solve all of the above mentioned problems. In particular, we show how to compute local flow, global (parametric) flow, rigid flow resulting from camera egomotion, and multiframe versions of the above problems. Using a spline-based description of the flow removes the need for overlapping correlation windows, and produces an explicit measure of the correlation between adjacent flow estimates. We demonstrate our algorithm on multiframe image registration and the recovery of 3D projective scene geometry. We also provide results on a number of standard motion sequences.", "title": "" }, { "docid": "8ab05713986a3fcb1ebe6973be40b13c", "text": "Long-term care nursing staff are subject to considerable occupational stress and report high levels of burnout, yet little is known about how stress and social support are associated with burnout in this population. The present study utilized the job demands-resources model of burnout to examine relations between job demands (occupational and personal stress), job resources (sources and functions of social support), and burnout in a sample of nursing staff at a long-term care facility (N = 250). Hierarchical linear regression analyses revealed that job demands (greater occupational stress) were associated with more emotional exhaustion, more depersonalization, and less personal accomplishment. Job resources (support from supervisors and friends or family members, reassurance of worth, opportunity for nurturing) were associated with less emotional exhaustion and higher levels of personal accomplishment. Interventions to reduce burnout that include a focus on stress and social support outside of work may be particularly beneficial for long-term care staff.", "title": "" }, { "docid": "f0472c6d3c47a72fc255d96971ece6fa", "text": "This work presents the transient thermal analysis of a permanent magnet (PM) synchronous traction motor. The motor has magnets inset into the surface of the rotor to give a maximum field-weakening range of between 2 and 2.5. 
Both analytically based lumped circuit and numerical finite element methods have been used to simulate the motor. A comparison of the two methods is made showing the advantages and disadvantages of each. Simulation results are compared with practical measurements.", "title": "" }, { "docid": "e50d156bde3479c27119231073705f70", "text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.", "title": "" }, { "docid": "fa240a48947a43b9130ee7f48c3ad463", "text": "Content distribution on today's Internet operates primarily in two modes: server-based and peer-to-peer (P2P). To leverage the advantages of both modes while circumventing their key limitations, a third mode: peer-to-server/peer (P2SP) has emerged in recent years. Although P2SP can provide efficient hybrid server-P2P content distribution, P2SP generally works in a closed manner by only utilizing its private owned servers to accelerate its private organized peer swarms. Consequently, P2SP still has its limitations in both content abundance and server bandwidth. To this end, the fourth mode (or says a generalized mode of P2SP) has appeared as \"open-P2SP\" that integrates various third-party servers, contents, and data transfer protocols all over the Internet into a large, open, and federated P2SP platform. In this paper, based on a large-scale commercial open-P2SP system named \"QQXuanfeng\" , we investigate the key challenging problems, practical designs and real-world performances of open-P2SP. Such \"white-box\" study of open-P2SP provides solid experiences and helpful heuristics to the designers of similar systems.", "title": "" }, { "docid": "523677ed6d482ab6551f6d87b8ad761e", "text": "To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web ” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. 
In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers, by randomizing the schema data into many trials and aggregating their ranked results by taking majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that the “ensemblization” indeed significantly boosts the matching accuracy, over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matchings Web query interfaces.", "title": "" }, { "docid": "7a3d73ec59c7e28d82cc04c4ee20986d", "text": "Updated spatial information on the dynamics of slums can be helpful to measure and evaluate progress of policies. Earlier studies have shown that semi-automatic detection of slums using remote sensing can be challenging considering the large variability in definition and appearance. In this study, we explored the potential of an object-oriented image analysis (OOA) method to detect slums, using very high resolution (VHR) imagery. This method integrated expert knowledge in the form of a local slum ontology. A set of image-based parameters was identified that was used for differentiating slums from non-slum areas in an OOA environment. The method was implemented on three subsets of the city of Ahmedabad, India. Results show that textural features such as entropy and contrast derived from a grey level co-occurrence matrix (GLCM) and the size of image segments are stable parameters for classification of built-up areas and the identification of slums. Relation with classified slum objects, in terms of enclosed by slums and relative border with slums was used to refine classification. The analysis on three different subsets showed final accuracies ranging from 47% to 68%. We conclude that our method produces useful results as it allows including location specific adaptation, whereas generically applicable rulesets for slums are still to be developed.", "title": "" }, { "docid": "fd517c58ce61fdbaf3caf0fdffb1e1f2", "text": "We focus on the problem of selecting meaningful tweets given a user's interests; the dynamic nature of user interests, the sheer volume, and the sparseness of individual messages make this an challenging problem. Specifically, we consider the task of time-aware tweets summarization, based on a user's history and collaborative social influences from ``social circles.'' We propose a time-aware user behavior model, the Tweet Propagation Model (TPM), in which we infer dynamic probabilistic distributions over interests and topics. We then explicitly consider novelty, coverage, and diversity to arrive at an iterative optimization algorithm for selecting tweets. 
Experimental results validate the effectiveness of our personalized time-aware tweets summarization method based on TPM.", "title": "" } ]
scidocsrr
955ea6aeb5705b1b4fcc212e9ab7e63f
Classification of Vehicle Collision Patterns in Road Accidents using Data Mining Algorithms
[ { "docid": "c581d1300bf07663fcfd8c704450db09", "text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "83688690678b474cd9efe0accfdb93f9", "text": "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality.", "title": "" } ]
[ { "docid": "070fb90db924de273c4f4351dd76f4ff", "text": "Path planning algorithms have been used in different applications with the aim of finding a suitable collision-free path which satisfies some certain criteria such as the shortest path length and smoothness; thus, defining a suitable curve to describe path is essential. The main goal of these algorithms is to find the shortest and smooth path between the starting and target points. This paper makes use of a Bézier curve-based model for path planning. The control points of the Bézier curve significantly influence the length and smoothness of the path. In this paper, a novel Chaotic Particle Swarm Optimization (CPSO) algorithm has been proposed to optimize the control points of Bézier curve, and the proposed algorithm comes in two variants: CPSO-I and CPSO-II. Using the chosen control points, the optimum smooth path that minimizes the total distance between the starting and ending points is selected. To evaluate the CPSO algorithm, the results of the CPSO-I and CPSO-II algorithms are compared with the standard PSO algorithm. The experimental results proved that the proposed algorithm is capable of finding the optimal path. Moreover, the CPSO algorithm was tested against different numbers of control points and obstacles, and the CPSO algorithm achieved competitive results.", "title": "" }, { "docid": "e5ed312b0c3aaa26240a9f3aaa2bd36e", "text": "This paper presents PDF-TREX, an heuristic approach for table recognition and extraction from PDF documents.The heuristics starts from an initial set of basic content elements and aligns and groups them, in bottom-up way by considering only their spatial features, in order to identify tabular arrangements of information. The scope of the approach is to recognize tables contained in PDF documents as a 2-dimensional grid on a Cartesian plane and extract them as a set of cells equipped by 2-dimensional coordinates. Experiments, carried out on a dataset composed of tables contained in documents coming from different domains, shows that the approach is well performing in recognizing table cells.The approach aims at improving PDF document annotation and information extraction by providing an output that can be further processed for understanding table and document contents.", "title": "" }, { "docid": "32fe4c03f9ddb0df128ccf3f64f844cd", "text": "Consider a stream of n-tuples that empirically define the joint distribution of n discrete random variables X1, . . . , Xn. Previous work of Indyk and McGregor [6] and Braverman et al. [1, 2] addresses the problem of determining whether these variables are n-wise independent by measuring the `p distance between the joint distribution and the product distribution of the marginals. An open problem in this line of work is to answer more general questions about the dependencies between the variables. One powerful way to express such dependencies is via Bayesian networks where nodes correspond to variables and directed edges encode dependencies. We consider the problem of testing such dependencies in the streaming setting. Our main results are: 1. A tight upper and lower bound of Θ̃(nk) on the space required to test whether the data is consistent with a given Bayesian network where k is the size of the range of each Xi and d is the max in-degree of the network. 2. A tight upper and lower bound of Θ̃(k) on the space required to compute any 2-approximation of the log-likelihood of the network. 3. 
Finally, we show space/accuracy trade-offs for the problem of independence testing using `1 and `2 distances.", "title": "" }, { "docid": "e39cafd4de135ccb17f7cf74cbd38a97", "text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.", "title": "" }, { "docid": "9b45bb1734e9afc34b14fa4bc47d8fba", "text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?", "title": "" }, { "docid": "7acd253de05b3eb27d0abccbcb45367e", "text": "High school programming competitions often follow the traditional model of collegiate competitions, exemplified by the ACM International Collegiate Programming Contest (ICPC). This tradition has been reinforced by the nature of Advanced Placement Computer Science (AP CS A), for which ICPC-style problems are considered an excellent practice regimen. As more and more students in high school computer science courses approach the field from broader starting points, such as Exploring Computer Science (ECS), or the new AP CS Principles course, an analogous structure for high school outreach events becomes of greater importance.\n This paper describes our work on developing a Scratch-based alternative competition for high school students, that can be run in parallel with a traditional morning of ICPC-style problems.", "title": "" }, { "docid": "904c285d720f51905c5378821199aac6", "text": "To evaluate the use of Labrafil® M2125CS as a lipid vehicle for danazol. Further, the possibility of predicting the in vivo behavior with a dynamic in vitro lipolysis model was evaluated. Danazol (28 mg/kg) was administered orally to rats in four formulations: an aqueous suspension, two suspensions in Labrafil® M2125CS (1 and 2 ml/kg) and a solution in Labrafil® M2125CS (4 ml/kg). The obtained absolute bioavailabilities of danazol were 1.5 ± 0.8%; 7.1 ± 0.6%; 13.6 ± 1.4% and 13.3 ± 3.4% for the aqueous suspension, 1, 2 and 4 ml Labrafil® M2125CS per kg respectively. Thus administration of danazol with Labrafil® M2125CS resulted in up to a ninefold increase in the bioavailability, and the bioavailability was dependent on the Labrafil® M2125CS dose. In vitro lipolysis of the formulations was able to predict the rank order of the bioavailability from the formulations, but not the absorption profile of the in vivo study. The bioavailability of danazol increased when Labrafil® M2125CS was used as a vehicle, both when danazol was suspended and solubilized in the vehicle. 
The dynamic in vitro lipolysis model could be used to rank the bioavailabilities of the in vivo data.", "title": "" }, { "docid": "6011ec4cad8fd6a20a38b2f9603ac1d1", "text": "Full text search in the legal domain is not enough, because there is a gap between common sense and legal knowledge. We want to present some possible directions to solve the problem of full text search in legal domain. The gap can be bridged using a model that mixes tags from Eurovoc Thesaurus Schema Ontology and legal ontology in order to enrich information retrieval capabilities in the legal domain.", "title": "" }, { "docid": "646d097ef0b299c0f591448fd842103e", "text": "Research on brain–machine interfaces has been ongoing for at least a decade. During this period, simultaneous recordings of the extracellular electrical activity of hundreds of individual neurons have been used for direct, real-time control of various artificial devices. Brain–machine interfaces have also added greatly to our knowledge of the fundamental physiological principles governing the operation of large neural ensembles. Further understanding of these principles is likely to have a key role in the future development of neuroprosthetics for restoring mobility in severely paralysed patients.", "title": "" }, { "docid": "e6260a482e1ba33e93c555b7ceddb625", "text": "OBJECTIVES\nTo investigate the prevalence and correlates of smartphone addiction among university students in Saudi Arabia.\n\n\nMETHODS\nThis cross-sectional study was conducted in King Saud University, Riyadh, Kingdom of Saudi Arabia between September 2014 and March 2015. An electronic self administered questionnaire and the problematic use of mobile phones (PUMP) Scale were used. \n\n\nRESULTS\nOut of 2367 study subjects, 27.2% stated that they spent more than 8 hours per day using their smartphones. Seventy-five percent used at least 4 applications per day, primarily for social networking and watching news. As a consequence of using the smartphones, at least 43% had decrease sleeping hours, and experienced a lack of energy the next day, 30% had a more unhealthy lifestyle  (ate more fast food, gained weight, and exercised less), and 25% reported that their academic achievement been adversely affected. There are statistically significant positive relationships among the 4 study variables, consequences of smartphone use (negative lifestyle, poor academic achievement), number of hours per day spent using smartphones, years of study, and number of applications used, and the outcome variable score on the PUMP. The mean values of the PUMP scale were 60.8 with a median of 60. \n\n\nCONCLUSION\nUniversity students in Saudi Arabia are at risk of addiction to smartphones; a phenomenon that is associated with negative effects on sleep, levels of energy, eating habits, weight, exercise, and academic performance.", "title": "" }, { "docid": "2a74e3be9866717b10a80c96fcbaeb6b", "text": "This paper studies the economics of match formation using a novel dataset obtained from a major online dating service. Online dating takes place in a new market environment that has become a common means to find a date or a marriage partner. According to comScore (2006), 17 percent of all North American and 18 percent of all European Internet users visited an online personals site in July 2006. In the United States, 37 percent of all single Internet users looking for a partner have visited a dating Web site (Mary Madden and Amanda Lenhart 2006). 
The Web site we study provides detailed information on the users’ attributes and interactions, which we use to estimate a rich model of mate preferences. Based on the preference estimates, we then examine whether an economic matching model can explain the observed online matching patterns, and we evaluate the efficiency of the matches obtained on the Web site. Finally, we explore whether the estimated preferences and a matching model are helpful in understanding sorting patterns observed “offline,” among dating and married couples. Two distinct literatures motivate this study. The first is the market design literature, which focuses on designing and evaluating the performance of market institutions. A significant branch of this literature is devoted to matching markets (Alvin E. Roth and Marilda A. O. Sotomayor 1990), with the goal of understanding the allocation mechanism in a particular market, and assessing whether an alternative mechanism with better theoretical properties (typically in terms Matching and Sorting in Online Dating", "title": "" }, { "docid": "595b020768622866ab0941031d5590dd", "text": "The wafer procedure is an effective treatment for ulnar impaction syndrome, which decompresses the ulnocarpal junction through a limited open or arthroscopic approach. In comparison with other common decompressive procedures, the wafer procedure does not require bone healing or internal fixation and also provides excellent exposure of the proximal surface of the triangular fibrocartilage complex. Results of the wafer procedure have been good and few complications have been reported.", "title": "" }, { "docid": "427c5f5825ca06350986a311957c6322", "text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. However, recent research has shown that machine learning models are venerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of a ML system by either slowing the learning process, or affecting the performance of the learned model or causing the system make error only in attacker’s planned scenario. Because of these developments, understanding security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.", "title": "" }, { "docid": "c2fc709aeb4c48a3bd2071b4693d4296", "text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. 
Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "title": "" }, { "docid": "947f6f1f2b5cbd646bfa9426cdfda7fe", "text": "In many real-world learning tasks it is expensive to acquire a su cient number of labeled examples for training. This paper investigates methods for reducing annotation cost by sample selection. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, and extends the committee-based paradigm to the context of probabilistic classi cation. We describe a family of empirical methods for committee-based sample selection in probabilistic classi cation models, which evaluate the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set labeled so far. The method was applied to the real-world natural language processing task of stochastic part-of-speech tagging. We nd that all variants of the method achieve a signi cant reduction in annotation cost, although their computational e ciency di ers. In particular, the simplest variant, a two member committee with no parameters to tune, gives excellent results. We also show that sample selection yields a signi cant reduction in the size of the model used by the tagger.", "title": "" }, { "docid": "77df04a0f997f402ae5771db5acda9db", "text": "0198-9715/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.compenvurbsys.2011.05.003 ⇑ Corresponding author. Tel.: +1 212 772 4658; fax E-mail address: gong@hunter.cuny.edu (H. Gong). 1 Present address: MTA Bus Company, Metropolitan Broadway, New York, NY 10004, United States. Handheld GPS provides a new technology to trace people’s daily travels and has been increasingly used for household travel surveys in major cities worldwide. However, methodologies have not been developed to successfully manage the enormous amount of data generated by GPS, especially in a complex urban environment such as New York City where urban canyon effects are significant and transportation networks are complicated. We develop a GIS algorithm that automatically processes the data from GPSbased travel surveys and detects five travel modes (walk, car, bus, subway, and commuter rail) from a multimodal transportation network in New York City. The mode detection results from the GIS algorithm are checked against the travel diaries from two small handheld GPS surveys. 
The combined success rate is a promising 82.6% (78.9% for one survey and 86.0% for another). Challenges we encountered in the mode detection process, ways we developed to meet these challenges, as well as possible future improvement to the GPS/GIS method are discussed in the paper, in order to provide a much-needed methodology to process GPS-based travel data for other cities. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d5debb44bb6cf518bbc3d8d5f88201e7", "text": "In multi-label learning, each training example is associated with multiple class labels and the task is to learn a mapping from the feature space to the power set of label space. It is generally demanding and time-consuming to obtain labels for training examples, especially for multi-label learning task where a number of class labels need to be annotated for the instance. To circumvent this difficulty, semi-supervised multi-label learning aims to exploit the readily-available unlabeled data to help build multi-label predictive model. Nonetheless, most semi-supervised solutions to multi-label learning work under transductive setting, which only focus on making predictions on existing unlabeled data and cannot generalize to unseen instances. In this paper, a novel approach named COINS is proposed to learning from labeled and unlabeled data by adapting the well-known co-training strategy which naturally works under inductive setting. In each co-training round, a dichotomy over the feature space is learned by maximizing the diversity between the two classifiers induced on either dichotomized feature subset. After that, pairwise ranking predictions on unlabeled data are communicated between either classifier for model refinement. Extensive experiments on a number of benchmark data sets show that COINS performs favorably against state-of-the-art multi-label learning approaches.", "title": "" }, { "docid": "8468d6fb6e9b89ac2ce09e2e32aaa2c4", "text": "0272-1732/00/$10.00  2000 IEEE The Hydra chip multiprocessor (CMP) integrates four MIPS-based processors and their primary caches on a single chip together with a shared secondary cache. A standard CMP offers implementation and performance advantages compared to wide-issue superscalar designs. However, it must be programmed with a more complicated parallel programming model to obtain maximum performance. To simplify parallel programming, the Hydra CMP supports thread-level speculation and memory renaming, a paradigm that allows performance similar to a uniprocessor of comparable die area on integer programs. This article motivates the design of a CMP, describes the architecture of the Hydra design with a focus on its speculative thread support, and describes our prototype implementation.", "title": "" }, { "docid": "02770bf28a64851bf773c56736efa537", "text": "Wearable robotics is strongly oriented to humans. New applications for wearable robots are encouraged by the lightness and portability of new devices and the progress in human-robot cooperation strategies. In this paper, we propose the different design guidelines to realize a robotic extra-finger for human grasping enhancement. Such guidelines were followed for the realization of three prototypes obtained using rapid prototyping techniques, i.e., a 3D printer and an open hardware development platform. Both fully actuated and under-actuated solutions have been explored. In the proposed wearable design, the robotic extra-finger can be worn as a bracelet in its rest position. 
The availability of a supplementary finger in the human hand allows to enlarge its workspace, improving grasping and manipulation capabilities. This preliminary work is a first step towards the development of robotic extra-limbs able to increase human workspace and dexterity.", "title": "" }, { "docid": "70c9fe96604c617a2e94fd721add3fb5", "text": "Multi-task learning aims to boost the performance of multiple prediction tasks by appropriately sharing relevant information among them. However, it always suffers from the negative transfer problem. And due to the diverse learning difficulties and convergence rates of different tasks, jointly optimizing multiple tasks is very challenging. To solve these problems, we present a weighted multi-task deep convolutional neural network for person attribute analysis. A novel validation loss trend algorithm is, for the first time proposed to dynamically and adaptively update the weight for learning each task in the training process. Extensive experiments on CelebA, Market-1501 attribute and Duke attribute datasets clearly show that state-of-the-art performance is obtained; and this validates the effectiveness of our proposed framework.", "title": "" } ]
scidocsrr
f6ba6c232827077319358a18873c7e9e
Isolated Micro-Grids With Renewable Hybrid Generation: The Case of Lençóis Island
[ { "docid": "e3d1282b2ed8c9724cf64251df7e14df", "text": "This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.", "title": "" } ]
[ { "docid": "11775f58f85bc3127a5857214ed20df0", "text": "The immune system can be defined as a complex system that protects the organism against organisms or substances that might cause infection or disease. One of the most fascinating characteristics of the immune system is its capability to recognize and respond to pathogens with significant specificity. Innate and adaptive immune responses are able to recognize for‐ eign structures and trigger different molecular and cellular mechanisms for antigen elimina‐ tion. The immune response is critical to all individuals; therefore numerous changes have taken place during evolution to generate variability and specialization, although the im‐ mune system has conserved some important features over millions of years of evolution that are common for all species. The emergence of new taxonomic categories coincided with the diversification of the immune response. Most notably, the emergence of vertebrates coincid‐ ed with the development of a novel type of immune response. Apparently, vertebrates in‐ herited innate immunity from their invertebrate ancestors [1].", "title": "" }, { "docid": "f53be608e9a27d5de0a87c03b953ca28", "text": "In this work, we present and analyze an image denoising method, the NL-means algorithm, based on a non local averaging of all pixels in the image. We also introduce the concept of method noise, that is, the difference between the original (always slightly noisy) digital image and its denoised version. Finally, we present some experiences comparing the NL-means results with some classical denoising methods.", "title": "" }, { "docid": "3a97900c5eb9c138921edd0e23fc3caa", "text": "In this paper, we provide a revolutionary vision of 5G networks, in which SDN technologies are used for the programmability of the wireless network, and where a NFV-ready network store is provided to Mobile Network Operators (MNO), Enterprises, and Over-The-Top (OTT) third parties. The proposed network serves as a digital distribution platform of programmable Virtualized Network Functions (VNFs) that enables 5G application use-cases. Currently existing application stores, such as Apple's App Store for iOS applications, Google's Play Store for Android, or Ubuntu's Software Center, deliver applications to user specific software platforms. Our vision is to provide a digital marketplace, gathering 5G enabling Network Applications and Network Functions, written to run on top of commodity cloud infrastructures, connected to remote radio heads (RRH). The 5G Network Store will be the same to the network provider as the application store is currently to a software platform.", "title": "" }, { "docid": "21916d34fb470601fb6376c4bcd0839a", "text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. 
acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.", "title": "" }, { "docid": "93278184377465ec1b870cd54dc49a93", "text": "We advocate the usage of 3D Zernike invariants as descriptors for 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under rotation, translation and scaling. Some theoretical results have already been summarized in the past from the aspect of pattern recognition and shape analysis. We provide practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion on influence of the algorithm parameters like the conversion into a volumetric function, number of utilized coefficients, etc. As is revealed by our study, the 3D Zernike descriptors are natural extensions of recently introduced spherical harmonics based descriptors. We conduct a comparison of 3D Zernike descriptors against these regarding computational aspects and shape retrieval performance using several quality measures and based on experiments on the Princeton Shape Benchmark.", "title": "" }, { "docid": "ff69af9c6ce771b0db8caeaa6da5478f", "text": "The use of Internet as a mean of shopping goods and services is growing over the past decade. Businesses in the e-commerce sector realize that the key factors for success are not limited to the existence of a website and low prices but must also include high standards of e-quality. Research indicates that the attainment of customer satisfaction brings along plenty of benefits. Furthermore, trust is of paramount importance, in ecommerce, due to the fact that that its establishment can diminish the perceived risk of using an internet service. The purpose of this study is to investigate the impact of customer perceived quality of an internet shop on customers’ satisfaction and trust. In addition, the possible effect of customer satisfaction on trust is also examined. An explanatory research approach was adopted in order to identify causal relationships between e-quality, customer satisfaction and trust. This was accomplished through field research by utilizing an interviewer-administered questionnaire. The questionnaire was largely based on existing constructs in relative literature. E-quality was divided into 5 dimensions, namely ease of use, e-scape, customization, responsiveness, and assurance. 
After being successfully pilot-tested by the managers of 3 Greek companies developing ecommerce software, 4 managers of Greek internet shops and 5 internet shoppers, the questionnaire was distributed to internet shoppers in central Greece. This process had as a result a total of 171 correctly answered questionnaires. Reliability tests and statistical analyses were performed to both confirm scale reliability and test research hypotheses. The findings indicate that all the examined e-quality dimensions expose a significant positive influence on customer satisfaction, with ease of use, e-scape and assurance being the most important ones. One the other hand, rather surprisingly, the only e-quality dimension that proved to have a significant positive impact on trust was customization. Finally, satisfaction was revealed to have a significant positive relation with trust.", "title": "" }, { "docid": "0966aa29291705b44a338692fed9fffc", "text": "Code-Mixing (CM) is defined as the embedding of linguistic units such as phrases, words, and morphemes of one language into an utterance of another language. CM is a natural phenomenon observed in many multilingual societies. It helps in speeding-up communication and allows wider variety of expression due to which it has become a popular mode of communication in social media forums like Facebook and Twitter. However, current Question Answering (QA) research and systems only support expressing a question in a single language which is an unrealistic and hard proposition especially for certain domains like health and technology. In this paper, we take the first step towards the development of a full-fledged QA system in CM language which is building a Question Classification (QC) system. The QC system analyzes the user question and infers the expected Answer Type (AType). The AType helps in locating and verifying the answer as it imposes certain type-specific constraints. In this paper, we present our initial efforts towards building a full-fledged QA system for CM language. We learn a basic Support Vector Machine (SVM) based QC system for English-Hindi CM questions. Due to the inherent complexities involved in processing CM language and also the unavailability of language processing resources such POS taggers, Chunkers, Parsers, we design our current system using only word-level resources such as language identification, transliteration and lexical translation. To reduce data sparsity and leverage resources available in a resource-rich language, in stead of extracting features directly from the original CM words, we translate them commonly into English and then perform featurization. We created an evaluation dataset for this task and our system achieves an accuracy of 63% and 45% in coarse-grained and fine-grained categories of the question taxanomy. The idea of translating features into English indeed helps in improving accuracy over the unigram baseline.", "title": "" }, { "docid": "85f5833628a4b50084fa50cbe45ebe4d", "text": "We introduce a functional gradient descent trajectory optimization algorithm for robot motion planning in Reproducing Kernel Hilbert Spaces (RKHSs). Functional gradient algorithms are a popular choice for motion planning in complex many-degree-of-freedom robots, since they (in theory) work by directly optimizing within a space of continuous trajectories to avoid obstacles while maintaining geometric properties such as smoothness. 
However, in practice, implementations such as CHOMP and TrajOpt typically commit to a fixed, finite parametrization of trajectories, often as a sequence of waypoints. Such a parameterization can lose much of the benefit of reasoning in a continuous trajectory space: e.g., it can require taking an inconveniently small step size and large number of iterations to maintain smoothness. Our work generalizes functional gradient trajectory optimization by formulating it as minimization of a cost functional in an RKHS. This generalization lets us represent trajectories as linear combinations of kernel functions. As a result, we are able to take larger steps and achieve a locally optimal trajectory in just a few iterations. Depending on the selection of kernel, we can directly optimize in spaces of trajectories that are inherently smooth in velocity, jerk, curvature, etc., and that have a low-dimensional, adaptively chosen parameterization. Our experiments illustrate the effectiveness of the planner for different kernels, including Gaussian RBFs with independent and coupled interactions among robot joints, Laplacian RBFs, and B-splines, as compared to the standard discretized waypoint representation.", "title": "" }, { "docid": "d925c828547a6d0685f5c4c040a00a76", "text": "Least mean square (LMS) based adaptive algorithms have been attracted much attention since their low computational complexity and robust recovery capability. To exploit the channel sparsity, LMS-based adaptive sparse channel estimation methods, e.g., zero-attracting LMS (ZA-LMS), reweighted zero-attracting LMS (RZA-LMS) and Lp - norm sparse LMS (LP-LMS), have also been proposed. To take full advantage of channel sparsity, in this paper, we propose several improved adaptive sparse channel estimation methods using Lp -norm normalized LMS (LP-NLMS) and L0 -norm normalized LMS (L0-NLMS). Comparing with previous methods, effectiveness of the proposed methods is confirmed by computer simulations.", "title": "" }, { "docid": "f89f5e08a2ee9e2c4685a2fde3bf5f36", "text": "Fungal infections, especially those caused by opportunistic species, have become substantially more common in recent decades. Numerous species cause human infections, and several new human pathogens are discovered yearly. This situation has created an increasing interest in fungal taxonomy and has led to the development of new methods and approaches to fungal biosystematics which have promoted important practical advances in identification procedures. However, the significance of some data provided by the new approaches is still unclear, and results drawn from such studies may even increase nomenclatural confusion. Analyses of rRNA and rDNA sequences constitute an important complement of the morphological criteria needed to allow clinical fungi to be more easily identified and placed on a single phylogenetic tree. Most of the pathogenic fungi so far described belong to the kingdom Fungi; two belong to the kingdom Chromista. 
Within the Fungi, they are distributed in three phyla and in 15 orders (Pneumocystidales, Saccharomycetales, Dothideales, Sordariales, Onygenales, Eurotiales, Hypocreales, Ophiostomatales, Microascales, Tremellales, Poriales, Stereales, Agaricales, Schizophyllales, and Ustilaginales).", "title": "" }, { "docid": "7220e44cff27a0c402a8f39f95ca425d", "text": "The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.", "title": "" }, { "docid": "478f0ac1084fb9b0eb1354d9627d8507", "text": "BACKGROUND\nFemale genital tract anomalies including imperforate hymen affect sexual life and fertility.\n\n\nCASE PRESENTATION\nIn the present case, we describe a pregnant woman diagnosed with imperforate hymen which never had penetrative vaginal sex. A 27-year-old married patient with 2 months of amenorrhea presented in a clinic without any other complications. Her history of difficult intercourse and prolonged menstrual flow were reported, and subsequent vaginal examination confirmed the diagnosis of imperforate hymen even though she claims to made pinhole surgery in hymen during puberty. Her urine pregnancy test was positive, and an ultrasound examination revealed 8.3 weeks pregnant. The pregnancy was followed up to 39.5 weeks when she entered in cesarean delivery in urgency. Due to perioperative complications in our study, a concomitant hymenotomy was successfully performed. The patient was discharged with the baby, and vaginal anatomy was restored.\n\n\nCONCLUSIONS\nThis case study suggests that even though as microperforated hymen surgery in puberty can permit pregnancy and intervention with cesarean section and hymenotomy is a good option to reduce the resulting perioperative complications which indirectly affect the increase of the fertilisation and improvement of later sexual life.", "title": "" }, { "docid": "8180c0bb869da12f32a847f70846807e", "text": "Large-scale adaptive radiations might explain the runaway success of a minority of extant vertebrate clades. This hypothesis predicts, among other things, rapid rates of morphological evolution during the early history of major groups, as lineages invade disparate ecological niches. However, few studies of adaptive radiation have included deep time data, so the links between extant diversity and major extinct radiations are unclear. The intensively studied Mesozoic dinosaur record provides a model system for such investigation, representing an ecologically diverse group that dominated terrestrial ecosystems for 170 million years. Furthermore, with 10,000 species, extant dinosaurs (birds) are the most speciose living tetrapod clade. 
We assembled composite trees of 614-622 Mesozoic dinosaurs/birds, and a comprehensive body mass dataset using the scaling relationship of limb bone robustness. Maximum-likelihood modelling and the node height test reveal rapid evolutionary rates and a predominance of rapid shifts among size classes in early (Triassic) dinosaurs. This indicates an early burst niche-filling pattern and contrasts with previous studies that favoured gradualistic rates. Subsequently, rates declined in most lineages, which rarely exploited new ecological niches. However, feathered maniraptoran dinosaurs (including Mesozoic birds) sustained rapid evolution from at least the Middle Jurassic, suggesting that these taxa evaded the effects of niche saturation. This indicates that a long evolutionary history of continuing ecological innovation paved the way for a second great radiation of dinosaurs, in birds. We therefore demonstrate links between the predominantly extinct deep time adaptive radiation of non-avian dinosaurs and the phenomenal diversification of birds, via continuing rapid rates of evolution along the phylogenetic stem lineage. This raises the possibility that the uneven distribution of biodiversity results not just from large-scale extrapolation of the process of adaptive radiation in a few extant clades, but also from the maintenance of evolvability on vast time scales across the history of life, in key lineages.", "title": "" }, { "docid": "e39ad8ee1d913cba1707b6aafafceefb", "text": "Thoracic Outlet Syndrome (TOS) is the constellation of symptoms caused by compression of neurovascular structures at the superior aperture of the thorax, properly the thoracic inlet! The diagnosis and treatment is contentious and some even question its existence. Symptoms are often confused with distal compression neuropathies or cervical", "title": "" }, { "docid": "41b5fe6da9d3970ceb09128171e6d604", "text": "250  Abstract— The basic principles of data mining is to analyze the data from different angle, categorize it and finally to summarize it. In today's world data mining have increasingly become very interesting and popular in terms of all application. The need for data mining is that we have too much data, too much technology but don't have useful information. Data mining software allows user to analyze data. This paper introduces the key principle of data pre-processing, classification, clustering and introduction of WEKA tool. Weka is a data mining tool. In this paper we are describing the steps of how to use WEKA tool for these technologies. It provides the facility to classify the data through various algorithms.", "title": "" }, { "docid": "8601b4355bd0272a4c3fa494678f4bb0", "text": "Ever more processes of our daily lives are shifting into the digital realm. Consequently, users face a variety of IT-security threats with possibly severe ramifications. It has been shown that technical measures alone are insufficient to counter all threats. For instance, it takes technical measures on average 32 hours before identifying and blocking phishing websites. Therefore, teaching users how to identify malicious websites is of utmost importance, if they are to be protected at all times. A number of ways to deliver the necessary knowledge to users exist. Among the most broadly used are instructor-based, computer-based and text-based training. 
We compare all three formats in the security context, or to be more precise in the context of anti-phishing training.", "title": "" }, { "docid": "88db61d4bbe3b2a6eb608896596b957a", "text": "There are many instances in which perceptual disfluency leads to improved memory performance, a phenomenon often referred to as the perceptual-interference effect (e.g., Diemand-Yauman, Oppenheimer, & Vaughn (Cognition 118:111-115, 2010); Nairne (Journal of Experimental Psychology: Learning, Memory, and Cognition 14:248-255, 1988)). In some situations, however, perceptual disfluency does not affect memory (Rhodes & Castel (Journal of Experimental Psychology: General 137:615-625, 2008)), or even impairs memory (Glass, (Psychology and Aging 22:233-238, 2007)). Because of the uncertain effects of perceptual disfluency, it is important to establish when disfluency is a \"desirable difficulty\" (Bjork, 1994) and when it is not, and the degree to which people's judgments of learning (JOLs) reflect the consequences of processing disfluent information. In five experiments, our participants saw multiple lists of blurred and clear words and gave JOLs after each word. The JOLs were consistently higher for the perceptually fluent items in within-subjects designs, which accurately predicted the pattern of recall performance when the presentation time was short (Exps. 1a and 2a). When the final test was recognition or when the presentation time was long, however, we found no difference in recall for clear and blurred words, although JOLs continued to be higher for clear words (Exps. 2b and 3). When fluency was manipulated between subjects, neither JOLs nor recall varied between formats (Exp. 1b). This study suggests a boundary condition for the desirable difficulty of perceptual disfluency and indicates that a visual distortion, such as blurring a word, may not always induce the deeper processing necessary to create a perceptual-interference effect.", "title": "" }, { "docid": "704d729295cddd358eba5eefdf0bdee4", "text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. 
The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.", "title": "" }, { "docid": "74ae28cf8b7f458b857b49748573709d", "text": "Muscle fiber conduction velocity is based on the ti me delay estimation between electromyography recording channels. The aims of this study is to id entify the best estimator of generalized correlati on methods in the case where time delay is constant in order to extent these estimator to the time-varyin g delay case . The fractional part of time delay was c lculated by using parabolic interpolation. The re sults indicate that Eckart filter and Hannan Thomson (HT ) give the best results in the case where the signa l to noise ratio (SNR) is 0 dB.", "title": "" } ]
scidocsrr
ced28169161b4033a80d56dab59f1074
Kinodynamic trajectory optimization and control for car-like robots
[ { "docid": "ea4a1405e1c6444726d1854c7c56a30d", "text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: christoph.roesmann@tu-dortmund.de (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016", "title": "" } ]
[ { "docid": "4edd0cd6a612cc5010be64440296d8fd", "text": "We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity if deployed on either conventional systems utilizing spatial-domain convolution or lower complexity systems designed for Winograd convolution. Furthermore, we explore the universal quantization and compression of these networks. In particular, the proposed framework produces one compressed model whose convolutional filters can be made sparse either in the spatial domain or in the Winograd domain. Hence, one compressed model can be deployed universally on any platform, without need for re-training on the deployed platform, and the sparsity of its convolutional filters can be exploited for further complexity reduction in either domain. To get a better compression ratio, the sparse model is compressed in the spatial domain which has a less number of parameters. From our experiments, we obtain 24.2×, 47.7× and 35.4× compressed models for ResNet-18, AlexNet and CT-SRCNN, while their computational cost is also reduced by 4.5×, 5.1× and 23.5×, respectively.", "title": "" }, { "docid": "bd1b178ad5eabe9d40319ebada94146b", "text": "The emergence and abundance of cooperation in nature poses a tenacious and challenging puzzle to evolutionary biology. Cooperative behaviour seems to contradict Darwinian evolution because altruistic individuals increase the fitness of other members of the population at a cost to themselves. Thus, in the absence of supporting mechanisms, cooperation should decrease and vanish, as predicted by classical models for cooperation in evolutionary game theory, such as the Prisoner's Dilemma and public goods games. Traditional approaches to studying the problem of cooperation assume constant population sizes and thus neglect the ecology of the interacting individuals. Here, we incorporate ecological dynamics into evolutionary games and reveal a new mechanism for maintaining cooperation. In public goods games, cooperation can gain a foothold if the population density depends on the average population payoff. Decreasing population densities, due to defection leading to small payoffs, results in smaller interaction group sizes in which cooperation can be favoured. This feedback between ecological dynamics and game dynamics can generate stable coexistence of cooperators and defectors in public goods games. However, this mechanism fails for pairwise Prisoner's Dilemma interactions and the population is driven to extinction. Our model represents natural extension of replicator dynamics to populations of varying densities.", "title": "" }, { "docid": "13517b6b95e70119775c2cee94c1f198", "text": "Fueled by the popularity of firms such as Airbnb and Uber, the sharing economy, otherwise referred to as the collaborative economy or peer economy, has recently gained increasing attention among practitioners and academics. In the sharing economy,3 new ventures develop and deploy digital platforms to enable peer-to-peer sharing of goods, services and information. The underlying proposition of sharing economy firms is that they can add value by allowing owners of resources to make their idle personal assets (e.g., rooms or homes) available to those who need them (e.g., travelers).4 As such, sharing economy firms are direct alternatives to established businesses (e.g., hotels). 
The resource optimization offered by these firms has become possible through recent technological advances in search, rating and matching algorithms, the spread of mobile consumer devices and the explosive growth in the use of", "title": "" }, { "docid": "33285813f1b3f2c13c711447199ed75d", "text": "This paper describes the dotplot data visualization technique and its potential for contributingto the identificationof design patterns. Pattern languages have been used in architectural design and urban planning to codify related rules-of-thumb for constructing vernacular buildings and towns. When applied to software design, pattern languages promote reuse while allowing novice designers to learn from the insights of experts. Dotplots have been used in biology to study similarity in genetic sequences. When applied to software, dotplots identify patterns that range in abstraction from the syntax of programming languages to the organizational uniformity of large, multi-component systems. Dotplots are useful for design by successive abstraction—replacing duplicated code with macros, subroutines, or classes. Dotplots reveal a pervasive design pattern for simplifying algorithms by increasing the complexity of initializations. Dotplots also reveal patterns of wordiness in languages—one example inspired a design pattern for a new programming language. In addition, dotplots of data associated with programs identify dynamic usage patterns—one example identifies a design pattern used in the construction of a UNIX(tm) file system.", "title": "" }, { "docid": "618582e1f8fb7830d59d0d38fb14c4ae", "text": "Today, we analyze an application of random projection to compute approximate solutions of constrained least-squares problems. This method is often referred to as sketched least-squares. Suppose that we are given an observation vector y ∈ R n and matrix A ∈ R n×d , and that for some convex set C ⊂ R d , we would like to compute the constrained least-squares solution x LS : = argmin x∈C 1 2 y − Ax 2 2 : =f (x). In general, this solution may not be unique, but we assume throughout this lecture that uniqueness holds (so that n ≥ d necessarily). Different versions of the constrained least-squares problem arise in many applications: • In the simplest case of an unconstrained problem (C = R d), it corresponds to the usual least-squares estimator, which has been widely studied. Most past work on sketching least-squares has focused on this case. • When C is a scaled form of the 1-ball—that is, C = {x ∈ R d | x 1 ≤ R}—then the constrained problem is known as the Lasso. It is widely used for estimating sparse regression vectors. • The support vector machine for classification, when solved in its dual form, leads to a least-squares problem over a polytope C. Problems of the form (3.1) can also arise as intermediate steps of using Newton's method to solve a constrained optimization problem. The original problem can be difficult to solve if the first matrix dimension n is too large. Thus, in order to reduce both storage and computation requirements, a natural idea is to randomly project the original data to a lower-dimensional space. In particular, given a random sketch matrix S ∈ R m×n , consider the sketched least-squares problem x : = argmin x∈C 1 2 S(y − Ax) 2 2 : =g(x) .", "title": "" }, { "docid": "afd0656733192f479ac3989812647227", "text": "In this paper we present a novel method for automatic traffic accident detection, based on Smoothed Particles Hydrodynamics (SPH). 
In our method, a motion flow field is obtained from the video through dense optical flow extraction. Then a thermal diffusion process (TDP) is exploited to turn the motion flow field into a coherent motion field. Approximating the moving particles to individuals, their interaction forces, represented as endothermic reactions, are computed using the enthalpy measure, thus obtaining the potential particles of interest. Furthermore, we exploit SPH that accumulates the contribution of each particle in a weighted form, based on a kernel function. The experimental evaluation is conducted on a set of video sequences collected from Youtube, and the obtained results are compared against a state of the art technique.", "title": "" }, { "docid": "f4a2e2cc920e28ae3d7539ba8b822fb7", "text": "Neurologic injuries, such as stroke, spinal cord injuries, and weaknesses of skeletal muscles with elderly people, may considerably limit the ability of this population to achieve the main daily living activities. Recently, there has been an increasing interest in the development of wearable devices, the so-called exoskeletons, to assist elderly as well as patients with limb pathologies, for movement assistance and rehabilitation. In this paper, we review and discuss the state of the art of the lower limb exoskeletons that are mainly used for physical movement assistance and rehabilitation. An overview of the commonly used actuation systems is presented. According to different case studies, a classification and comparison between different types of actuators is conducted, such as hydraulic actuators, electrical motors, series elastic actuators, and artificial pneumatic muscles. Additionally, the mainly used control strategies in lower limb exoskeletons are classified and reviewed, based on three types of human-robot interfaces: the signals collected from the human body, the interaction forces between the exoskeleton and the wearer, and the signals collected from exoskeletons. Furthermore, the performances of several typical lower limb exoskeletons are discussed, and some assessment methods and performance criteria are reviewed. Finally, a discussion of the major advances that have been made, some research directions, and future challenges are presented.", "title": "" }, { "docid": "f0d17b259b699bc7fb7e8f525ec64db0", "text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.", "title": "" }, { "docid": "f0f17b4d7bf858e84ed12d0f5f309d4e", "text": "KEY CLINICAL MESSAGE\nPatient complained of hearing loss and tinnitus after the onset of Reiter's syndrome. Audiometry confirmed the hearing loss on the left ear; blood work showed increased erythrocyte sedimentation rate and C3 fraction of the complement. 
Genotyping for HLA-B27 was positive. Treatment with prednisolone did not improve the hearing levels.", "title": "" }, { "docid": "11707c7f7c5b028392b25d1dffa9daeb", "text": "High reliability and large rangeability are required of pumps in existing and new plants which must be capable of reliable on-off cycling operations and specially low load duties. The reliability and rangeability target is a new task for the pump designer/researcher and is made very challenging by the cavitation and/or suction recirculation effects, first of all the pump damage. The present knowledge about the: a) design critical parameters and their optimization, b) field problems diagnosis and troubleshooting has much advanced, in the very latest years. The objective of the pump manufacturer is to develop design solutions and troubleshooting approaches which improve the impeller life as related to cavitation erosion and enlarge the reliable operating range by minimizing the effects of the suction recirculation. This paper gives a short description of several field cases characterized by different damage patterns and other symptoms related with cavitation and/or suction recirculation. The troubleshooting methodology is described in detail, also focusing on the role of both the pump designer and the pump user.", "title": "" }, { "docid": "597522575f1bc27394da2f1040e9eaa5", "text": "Many natural language processing systems rely on machine learning models that are trained on large amounts of manually annotated text data. The lack of sufficient amounts of annotated data is, however, a common obstacle for such systems, since manual annotation of text is often expensive and time-consuming. The aim of “PAL, a tool for Pre-annotation and Active Learning” is to provide a ready-made package that can be used to simplify annotation and to reduce the amount of annotated data required to train a machine learning classifier. The package provides support for two techniques that have been shown to be successful in previous studies, namely active learning and pre-annotation. The output of the pre-annotation is provided in the annotation format of the annotation tool BRAT, but PAL is a stand-alone package that can be adapted to other formats.", "title": "" }, { "docid": "a96f219a2a1baac2c0d964a5a7d9fb62", "text": "Spam-reduction techniques have developed rapidly ov er the last few years, as spam volumes have increased. We believe that no one anti-spam soluti on is the “right” answer, and that the best approac h is a multifaceted one, combining various forms of filtering w ith infrastructure changes, financial changes, lega l recourse, and more, to provide a stronger barrier to spam tha n can be achieved with one solution alone. SpamGur u addresses the part of this multi-faceted approach t hat can be handled by technology on the recipient’s side, using plug-in tokenizers and parsers, plug-in classificat ion modules, and machine-learning techniques to ach ieve high hit rates and low false-positive rates.", "title": "" }, { "docid": "cd6270a071c4e386f0372d9d771cdb14", "text": "This paper introduces Iconoscope, a game aiming to foster the creativity of a young target audience in formal or informal educational settings. At the core of the Iconoscope design is the creative, playful interpretation of word-concepts via the construction of visual icons. In addition to that, the game rewards ambiguity via a scoring system which favors icons that dichotomize public opinion. 
The game is played by a group of players, with each player attempting to guess which of the concepts provided by the system is represented by each opponent’s created icon. Through the social interaction that emerges, Iconoscope prompts co-creativity within a group of players; in addition, the game offers the potential of human-machine co-creativity via computer-generated suggestions to the player’s icon. Experiments with early prototypes, described in this paper, provide insight into the design process and motivate certain decisions taken for the current version of Iconoscope which, at the time of writing, is being evaluated in selected schools in Greece, Austria and the United Kingdom.", "title": "" }, { "docid": "f792142a55488e701211d95acbb176e1", "text": "Steganography is a technique of hiding messages in such a way that only the intended user can view and able to suspects the reality of the message, a form of security through anonymity. Another definition of Steganography is hiding one chunk of information inside some other information. The different Steganography algorithms like Discrete Wavelet Transform system (DWT), Least Significant bit algorithm (LSB), Blowfish algorithm and Sparse Matrix Encoding which are used for improvement of security strength for hiding the information are discussed in thispaper.", "title": "" }, { "docid": "6f5a3f7ddb99eee445d342e6235280c3", "text": "Although aesthetic experiences are frequent in modern life, there is as of yet no scientifically comprehensive theory that explains what psychologically constitutes such experiences. These experiences are particularly interesting because of their hedonic properties and the possibility to provide self-rewarding cognitive operations. We shall explain why modern art's large number of individualized styles, innovativeness and conceptuality offer positive aesthetic experiences. Moreover, the challenge of art is mainly driven by a need for understanding. Cognitive challenges of both abstract art and other conceptual, complex and multidimensional stimuli require an extension of previous approaches to empirical aesthetics. We present an information-processing stage model of aesthetic processing. According to the model, aesthetic experiences involve five stages: perception, explicit classification, implicit classification, cognitive mastering and evaluation. The model differentiates between aesthetic emotion and aesthetic judgments as two types of output.", "title": "" }, { "docid": "9ad1acc78312d66f3e37dfb39f4692df", "text": "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.", "title": "" }, { "docid": "374383490d88240b410a14a185ff082e", "text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. 
The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.", "title": "" }, { "docid": "2907badaf086752657c09d45fa99111e", "text": "The 3L-NPC (three-level neutral-point-clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies. A new PWM strategy is also proposed in the paper. It has numerous advantages: (a) natural doubling of the apparent switching frequency without using the flying-capacitor concept, (b) dead times do not influence the operating mode at 50% of the duty cycle, (c) operating at both high and small switching frequencies without structural modifications and (d) better balancing of loss distribution in switches. The PSIM simulation results are shown in order to validate the proposed PWM strategy and the analysis of the switching states.", "title": "" }, { "docid": "e42357ff2f957f6964bab00de4722d52", "text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. 
We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.", "title": "" }, { "docid": "a21454cc2b7d02cdf509d47930e342cd", "text": "The enormous growth of information on the Internet makes finding information challenging and time consuming. Recommender systems provide a solution to this problem by automatically capturing user interests and recommending related information the user may also find interesting. In this paper, we present a novel recommender system for the research paper domain using a Dynamic Normalized Tree of Concepts (DNTC) model. Our system improves existing vector and tree of concepts models to be adaptable with a complex ontology and a large number of papers. The proposed system uses the 2012 version of the ACM Computing Classification System (CCS) ontology. This ontology has a much deeper structure than previous versions, which makes it challenging for previous ontology-based approaches to recommender systems. We performed offline evaluations using papers provided by ACM digital library for classifier training, and papers provided by CiteSeerX digital library for measuring the performance of the proposed DNTC model. Our evaluation results show that the novel DNTC model significantly outperforms the other two models: non-normalized tree of concepts and the vector of concepts models. Further, our DNTC model provides high average precision and reliable results when used in a context which the user has multiple interests and reads a large quantity of papers over time.", "title": "" } ]
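The recommender passage directly above contrasts a tree-of-concepts model with a flat vector-of-concepts baseline. As a rough illustration of that baseline only (not of the DNTC model, and with made-up concept weights), papers can be ranked by cosine similarity between a user's concept-weight vector and each paper's concept vector:

```python
import numpy as np

def recommend(user_profile, paper_vectors, top_n=3):
    """Rank papers by cosine similarity between a user's concept-weight vector and
    each paper's concept vector (flat 'vector of concepts' baseline; a tree-based
    model would additionally normalise weights over the concept hierarchy)."""
    u = user_profile / (np.linalg.norm(user_profile) + 1e-12)
    P = paper_vectors / (np.linalg.norm(paper_vectors, axis=1, keepdims=True) + 1e-12)
    scores = P @ u
    order = np.argsort(scores)[::-1][:top_n]
    return order, scores[order]

# toy usage: 4 CCS-style concepts, user mostly interested in the first two
user = np.array([0.7, 0.6, 0.1, 0.0])
papers = np.array([[0.9, 0.4, 0.0, 0.0],    # closest to the user's interests
                   [0.0, 0.1, 0.8, 0.6],
                   [0.5, 0.5, 0.5, 0.5]])
print(recommend(user, papers, top_n=2))
```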
scidocsrr
7687714c056eda424f7276b5515b23f6
An efficient and simple under-sampling technique for imbalanced time series classification
[ { "docid": "a13a50d552572d08b4d1496ca87ac160", "text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.", "title": "" } ]
[ { "docid": "58a75098bc32cb853504a91ddc53e1e8", "text": "In this study, forest type mapping data set taken from UCI (University of California, Irvine) machine learning repository database has been classified using different machine learning algorithms including Multilayer Perceptron, k-NN, J48, Naïve Bayes, Bayes Net and KStar. In this dataset, there are 27 spectral values showing the type of three different forests (Sugi, Hinoki, mixed broadleaf). As the performance measure criteria, the classification accuracy has been used to evaluate the classifier algorithms and then to select the best method. The best classification rates have been obtained 90.43% with MLP, and 89.1013% with k-NN classifier (for k=5). As can be seen from the obtained results, the machine learning algorithms including MLP and k-NN classifier have obtained very promising results in the classification of forest type with 27 spectral features.", "title": "" }, { "docid": "6c493ff556ac3df190e114a37b062c2b", "text": "This paper presents a novel active learning method developed in the framework of ε-insensitive support vector regression (SVR) for the solution of regression problems with small size initial training data. The proposed active learning method selects iteratively the most informative as well as representative unlabeled samples to be included in the training set by jointly evaluating three criteria: (i) relevancy, (ii) diversity, and (iii) density of samples. All three criteria are implemented according to the SVR properties and are applied in two clustering-based consecutive steps. In the first step, a novel measure to select the most relevant samples that have high probability to be located either outside or on the boundary of the ε-tube of SVR is defined. To this end, initially a clustering method is applied to all unlabeled samples together with the training samples that are inside the ε-tube (those that are not support vectors, i.e., non-SVs); then the clusters with non-SVs are eliminated. The unlabeled samples in the remaining clusters are considered as the most relevant patterns. In the second step, a novel measure to select diverse samples among the relevant patterns from the high density regions in the feature space is defined to better model the SVR learning function. To this end, initially clusters with the highest density of samples are chosen to identify the highest density regions in the feature space. Then, the sample from each selected cluster that is associated with the portion of feature space having the highest density (i.e., the most representative of the underlying distribution of samples contained in the related cluster) is selected to be included in the training set. In this way diverse samples taken from high density regions are efficiently identified. Experimental results obtained on four different data sets show the robustness of the proposed technique particularly when a small-size initial training set are available. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "feca14524ff389c59a4d6f79954f26e3", "text": "Zero shot learning (ZSL) is about being able to recognize gesture classes that were never seen before. This type of recognition involves the understanding that the presented gesture is a new form of expression from those observed so far, and yet carries embedded information universal to all the other gestures (also referred as context). 
As part of the same problem, it is required to determine what action/command this new gesture conveys, in order to react to the command autonomously. Research in this area may shed light to areas where ZSL occurs, such as spontaneous gestures. People perform gestures that may be new to the observer. This occurs when the gesturer is learning, solving a problem or acquiring a new language. The ability of having a machine recognizing spontaneous gesturing, in the same manner as humans do, would enable more fluent human-machine interaction. In this paper, we describe a new paradigm for ZSL based on adaptive learning, where it is possible to determine the amount of transfer learning carried out by the algorithm and how much knowledge is acquired from a new gesture observation. Another contribution is a procedure to determine what are the best semantic descriptors for a given command and how to use those as part of the ZSL approach proposed.", "title": "" }, { "docid": "3eeb8af163f02e8ab5f709bf75bc20d6", "text": "The connection between part-of-speech (POS) categories and morphological properties is well-documented in linguistics but underutilized in text processing systems. This paper proposes a novel model for morphological segmentation that is driven by this connection. Our model learns that words with common affixes are likely to be in the same syntactic category and uses learned syntactic categories to refine the segmentation boundaries of words. Our results demonstrate that incorporating POS categorization yields substantial performance gains on morphological segmentation of Arabic. 1", "title": "" }, { "docid": "54c8a8669b133e23035d93aabdc01a54", "text": "The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, leading though to impressive results. The antenna is build on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. Such a device serves as a fix GPS reference distributing the positioning information to mobile device users and delivering at the same time services via GSM network standards or via Wi-Fi / Bluetooth connections.", "title": "" }, { "docid": "74f90683e6daae840cdb5ffa3c1b6e4a", "text": "Texture atlas parameterization provides an effective way to map a variety of color and data attributes from 2D texture domains onto polygonal surface meshes. However, the individual charts of such atlases are typically plagued by noticeable seams. We describe a new type of atlas which is seamless by construction. Our seamless atlas comprises all quadrilateral charts, and permits seamless texturing, as well as per-fragment down-sampling on rendering hardware and polygon simplification. We demonstrate the use of this atlas for capturing appearance attributes and producing seamless renderings.", "title": "" }, { "docid": "ef6160d304908ea87287f2071dea5f6d", "text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. 
Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.", "title": "" }, { "docid": "5d10681b2b56c8fdf6b7510c6c795b80", "text": "The market share of mobile devices like smartphones and tablets is growing rapidly. These devices are increasingly used to access the services offered on the Internet. Time spent online has started to shift considerably from desktop and laptop computers to mobile connected post-pc devices. In certain areas, mobile usage has already exceeded the access and traffic generated by desktop computers. This development also affects the usage behavior and ex-pectations of job seekers when accessing job ads and other job-related information online. In this context the paper at hand presents a study analyzing the behavior and user expectations of job seekers using mobile devices in Germany. The study shows that the majority of smartphone and tablet users have already accessed job ads and used job search applications (\"apps\") through these devices. Moreover, many of those respondents who have already accessed job ads with a post-pc device also expect to be able to apply for a job via smartphone and tablet.", "title": "" }, { "docid": "7adf452c728be4552d5588f8b3af5070", "text": "In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs. We experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs, our logical form of choice. Our proposed model generally outperforms the other models on two QA datasets over the DBpedia knowledge graph, evaluated in different settings. In addition, we show that transfer learning from the larger of those QA datasets to the smaller dataset yields substantial improvements, effectively offsetting the general lack of training data.", "title": "" }, { "docid": "e660a3407d3ae46995054764549adc35", "text": "The factors predicting stress, anxiety and depression in the parents of children with autism remain poorly understood. In this study, a cohort of 250 mothers and 229 fathers of one or more children with autism completed a questionnaire assessing reported parental mental health problems, locus of control, social support, perceived parent-child attachment, as well as autism symptom severity and perceived externalizing behaviours in the child with autism. Variables assessing parental cognitions and socioeconomic support were found to be more significant predictors of parental mental health problems than child-centric variables. 
A path model, describing the relationship between the dependent and independent variables, was found to be a good fit with the observed data for both mothers and fathers.", "title": "" }, { "docid": "f9450ee0f87fd5071f7808093c5170a8", "text": "ÐWe consider the problem of reconstructing the 3D coordinates of a moving point seen from a monocular moving camera, i.e., to reconstruct moving objects from line-of-sight measurements only. The task is feasible only when some constraints are placed on the shape of the trajectory of the moving point. We coin the family of such tasks as atrajectory triangulation.o We investigate the solutions for points moving along a straight-line and along conic-section trajectories. We show that if the point is moving along a straight line, then the parameters of the line (and, hence, the 3D position of the point at each time instant) can be uniquely recovered, and by linear methods, from at least five views. For the case of conic-shaped trajectory, we show that generally nine views are sufficient for a unique reconstruction of the moving point and fewer views when the conic is of a known type (like a circle in 3D Euclidean space for which seven views are sufficient). The paradigm of trajectory triangulation, in general, pushes the envelope of processing dynamic scenes forward. Thus static scenes become a particular case of a more general task of reconstructing scenes rich with moving objects (where an object could be a single point). Index TermsÐStructure from motion, multiple-view geometry, dynamic scenes.", "title": "" }, { "docid": "cabfa3e645415d491ed4ca776b9e370a", "text": "The impact of social networks in customer buying decisions is rapidly increasing, because they are effective in shaping public opinion. This paper helps marketers analyze a social network’s members based on different characteristics as well as choose the best method for identifying influential people among them. Marketers can then use these influential people as seeds for market products/services. Considering the importance of opinion leadership in social networks, the authors provide a comprehensive overview of existing literature. Studies show that different titles (such as opinion leaders, influential people, market mavens, and key players) are used to refer to the influential group in social networks. In this paper, all the properties presented for opinion leaders in the form of different titles are classified into three general categories, including structural, relational, and personal characteristics. Furthermore, based on studying opinion leader identification methods, appropriate parameters are extracted in a comprehensive chart to evaluate and compare these methods accurately. based marketing, word-of-mouth marketing has more creditability (Li & Du, 2011), because there is no direct link between the sender and the merchant. As a result, information is considered independent and subjective. In recent years, many researches in word-of-mouth marketing investigate discovering influential nodes in a social network. These influential people are called opinion leaders in the literature. Organizations interested in e-commerce need to identify opinion leaders among their customers, also the place (web site) which they are going online. This is the place they can market their products. DOI: 10.4018/jvcsn.2011010105 44 International Journal of Virtual Communities and Social Networking, 3(1), 43-59, January-March 2011 Copyright © 2011, IGI Global. 
Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. Social Network Analysis Regarding the importance of interpersonal relationship, studies are looking for formal methods to measures who talks to whom in a community. These methods are known as social network analysis (Scott, 1991; Wasserman & Faust, 1994; Rogers & Kincaid, 1981; Valente & Davis, 1999). Social network analysis includes the study of the interpersonal relationships. It usually is more focused on the network itself, rather than on the attributes of the members (Li & Du, 2011). Valente and Rogers (1995) have described social network analysis from the point of view of interpersonal communication by “formal methods of measuring who talks to whom within a community”. Social network analysis enables researchers to identify people who are more central in the network and so more influential. By using these central people or opinion leaders as seeds diffusion of a new product or service can be accelerated (Katz & Lazarsfeld, 1955; Valente & Davis, 1999). Importance of Social Networks for Marketing The importance of social networks as a marketing tool is increasing, and it includes diverse areas (Even-Dar & Shapirab, 2011). Analysis of interdependencies between customers can improve targeted marketing as well as help organization in acquisition of new customers who are not detectable by traditional techniques. By recent technological developments social networks are not limited in face-to-face and physical relationships. Furthermore, online social networks have become a new medium for word-of-mouth marketing. Although the face-to-face word-of-mouth has a greater impact on consumer purchasing decisions over printed information because of its vividness and credibility, in recent years with the growth of the Internet and virtual communities the written word-of-mouth (word-of-mouse) has been created in the online channels (Mak, 2008). Consider a company that wants to launch a new product. This company can benefit from popular social networks like Facebook and Myspace rather than using classical advertising channels. Then, convincing several key persons in each network to adopt the new product, can help a company to exploit an effective diffusion in the network through word-of-mouth. According to Nielsen’s survey of more than 26,000 internet uses, 78% of respondents exhibited recommendations from others are the most trusted source when considering a product or service (Nielsen, 2007). Based on another study conducted by Deloitte’s Consumer Products group, almost 62% of consumers who read consumer-written product reviews online declare their purchase decisions have been directly influenced by the user reviews (Delottie, 2007). Empirical studies have demonstrated that new ideas and practices spread through interpersonal communication (Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Hawkins et al. (1995) suggest that companies can use four possible courses of action, including marketing research, product sampling, retailing/personal selling and advertising to use their knowledge of opinion leaders to their advantage. The authors of this paper in a similar study have done a review of related literature using social networks for improving marketing response. 
They discuss the benefits and challenges of utilizing interpersonal relationships in a network as well as opinion leader identification; also, a three step process to show how firms can apply social networks for their marketing activities has been proposed (Jafari Momtaz et al., 2011). While applications of opinion leadership in business and marketing have been widely studied, it generally deals with the development of measurement scale (Burt, 1999), its importance in the social sciences (Flynn et al., 1994), and its application to various areas related to the marketing, such as the health care industry, political science (Burt, 1999) and public communications (Howard et al., 2000; Locock et al., 2001). In this paper, a comprehensive review of studies in the field of opinion leadership and employing social networks to improve the marketing response is done. In the next sec15 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/identifying-opinion-leadersmarketing-analyzing/60541?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2", "title": "" }, { "docid": "b6249dbd61928a0722e0bcbf18cd9f79", "text": "For many applications such as tele-operational robots and interactions with virtual environments, it is better to have performance with force feedback than without. Haptic devices are force reflecting interfaces. They can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefited from the unique design, it is a hybrid structure device with a large workspace and high output capability. Therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors are calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed.", "title": "" }, { "docid": "1580a496e78f9dc5599201db32e4ab94", "text": "Path planning is one of the key technologies in the robot research. The aim of it is to find the shortest safe path in the objective environments. Firstly, the robot is transformed into particle by expanding obstacles method; the obstacle is transformed into particle by multi-round enveloping method. Secondly, we make the Voronoi graph of the particles of obstacle and find the skeleton topology about the feasible path. Following, a new arithmetic named heuristic bidirectional ant colony algorithm is proposed by joining the merit of ant colony algorithm, Dijkstra algorithm and heuristic algorithm, with which we can find the shortest path of the skeleton topology. 
After transforming the path planning into n-dimensions quadrate feasible region by coordinate transformation and solving it with particle swarm optimization, the optimization of the path planning is acquired.", "title": "" }, { "docid": "30ba7b3cf3ba8a7760703a90261d70eb", "text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the -amylase family or family13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a ( / )8 barrel structure, the hydrolysis or formation of glycosidic bonds in the conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the -amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the -amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "4b7e218d3f1c6a0c9c732a130fd2ddb3", "text": "In this paper, we propose a way of synthesizing realistic images directly with natural language description, which has many useful applications, e.g. intelligent image manipulation. We attempt to accomplish such synthesis: given a source image and a target text description, our model synthesizes images to meet two requirements: 1) being realistic while matching the target text description; 2) maintaining other image features that are irrelevant to the text description. The model should be able to disentangle the semantic information from the two modalities (image and text), and generate new images from the combined semantics. To achieve this, we proposed an end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill the aforementioned two requirements. 
We have evaluated our model by conducting experiments on Caltech-200 bird dataset and Oxford-102 flower dataset, and have demonstrated that our model is capable of synthesizing realistic images that match the given descriptions, while still maintain other features of original images.", "title": "" }, { "docid": "fc2c995d20c83a72ea46f5055d1847a1", "text": "In this paper, we present a novel probabilistic compact representation of the on-road environment, i.e., the dynamic probabilistic drivability map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. The DPDM is a flexible representation and readily accepts data from a variety of sensor modalities to represent the on-road environment as a spatially coded data structure, encapsulating spatial, dynamic, and legal information. Using the DPDM, we develop a general predictive system for LCMs. We formulate the LCM assistance system to solve for the minimum-cost solution to merge or change lanes, which is solved efficiently using dynamic programming over the DPDM. Based on the DPDM, the LCM system recommends the required acceleration and timing to safely merge or change lanes with minimum cost. System performance has been extensively validated using real-world on-road data, including urban driving, on-ramp merges, and both dense and free-flow highway conditions.", "title": "" }, { "docid": "7ef3829b1fab59c50f08265d7f4e0132", "text": "Muscle glycogen is the predominant energy source for soccer match play, though its importance for soccer training (where lower loads are observed) is not well known. In an attempt to better inform carbohydrate (CHO) guidelines, we quantified training load in English Premier League soccer players (n = 12) during a one-, two- and three-game week schedule (weekly training frequency was four, four and two, respectively). In a one-game week, training load was progressively reduced (P < 0.05) in 3 days prior to match day (total distance = 5223 ± 406, 3097 ± 149 and 2912 ± 192 m for day 1, 2 and 3, respectively). Whilst daily training load and periodisation was similar in the one- and two-game weeks, total accumulative distance (inclusive of both match and training load) was higher in a two-game week (32.5 ± 4.1 km) versus one-game week (25.9 ± 2 km). In contrast, daily training total distance was lower in the three-game week (2422 ± 251 m) versus the one- and two-game weeks, though accumulative weekly distance was highest in this week (35.5 ± 2.4 km) and more time (P < 0.05) was spent in speed zones >14.4 km · h(-1) (14%, 18% and 23% in the one-, two- and three-game weeks, respectively). Considering that high CHO availability improves physical match performance but high CHO availability attenuates molecular pathways regulating training adaptation (especially considering the low daily customary loads reported here, e.g., 3-5 km per day), we suggest daily CHO intake should be periodised according to weekly training and match schedules.", "title": "" }, { "docid": "478d910cda9aab5d75b95066b355cc1a", "text": "Business rules are the basis of any organization. From an information systems perspective, these business rules function as constraints on a database helping ensure that the structure and content of the real world—sometimes referred to as miniworld—is accurately incorporated into the database. 
It is important to elicit these rules during the analysis and design stage, since the captured rules are the basis for subsequent development of a business constraints repository. We present a taxonomy for set-based business rules, and describe an overarching framework for modeling rules that constrain the cardinality of sets. The proposed framework results in various types constraints, i.e., attribute, class, participation, projection, co-occurrence, appearance and overlapping, on a semantic model that supports abstractions like classification, generalization/specialization, aggregation and association. We formally define the syntax of our proposed framework in Backus-Naur Form and explicate the semantics using first-order logic. We describe partial ordering in the constraints and define the concept of metaconstraints, which can be used for automatic constraint consistency checking during the design stage itself. We demonstrate the practicality of our approach with a case study and show how our approach to modeling business rules seamlessly integrates into existing database design methodology. Via our proposed framework, we show how explicitly capturing data semantics will help bridge the semantic gap between the real world and its representation in an information system. r 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6d13952afa196a6a77f227e1cc9f43bd", "text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.", "title": "" } ]
scidocsrr
208dfe685ab49ca94e85702a28329f39
Vision-Based SLAM and Moving Objects Tracking for the Perceptual Support of a Smart Walker Platform
[ { "docid": "1ff51e3f6b73aa6fe8eee9c1fb404e4e", "text": "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.", "title": "" } ]
[ { "docid": "28e1d829bfc11147881e3dfde945ccbd", "text": "In wearable robotics applications, actuators are required to satisfy strict constraints in terms of safety and controllability. The introduction of intrinsic compliance can help to meet both these requirements. However, the high torque and power necessary for robotic systems for gait assistance requires the use of custom elements, able to guarantee high performances with a compact and lightweight design. This paper presents a rotary Series Elastic Actuator (SEA), suitable to be used in an active orthosis for knee assistance during overground walking. The system includes a commercial flat brushless DC motor, a Harmonic Drive gear and a custom-designed torsion spring. Spring design has been optimized by means of an iterative FEM simulations-based process and can be directly connected to the output shaft, thus guaranteeing high torque fidelity. With a total weight of 1.8 kg, it is possible to directly include the actuator in the frame of a wearable orthosis for knee flexion/extension assistance. The presented design allows to obtain a large-force bandwidth of 5 Hz and to regulate output impedance in a range compatible to locomotion assistance of elderly subjects with an age-related decay of motor performances.", "title": "" }, { "docid": "4f069eeff7cf99679fb6f31e2eea55f0", "text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization", "title": "" }, { "docid": "9cce3a9ed14279acae533befc31735c7", "text": "Flower pollination algorithm (FPA) is a nature-inspired meta-heuristics to handle a large scale optimization process. This paper reviews the previous studies on the application of FPA, modified FPA and hybrid FPA for solving optimization problems. The effectiveness of FPA for solving the optimization problems are highlighted and discussed. The improvement aspects include local and global search strategies and the quality of the solutions. The measured enhancements in FPA are based on various research domains. 
The results of review indicate the capability of the enhanced and hybrid FPA for solving optimization problems in variety of applications and outperformed the results of other established optimization techniques.", "title": "" }, { "docid": "7b4dd695182f7e15e58f44e309bf897c", "text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.", "title": "" }, { "docid": "2342c92f91c243474a53323a476ae3d9", "text": "Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID shall become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of 96.5 and 92.8 percent in the identical-position and diverse-positions scenario, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.", "title": "" }, { "docid": "f2daa3fd822be73e3663520cc6afe741", "text": "Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. 
Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having ‘above basic’ proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.", "title": "" }, { "docid": "d25a34b3208ee28f9cdcddb9adf46eb4", "text": "1 Umeå University, Department of Computing Science, SE-901 87 Umeå, Sweden, {jubo,thomasj,marie}@cs.umu.se Abstract  The transition to object-oriented programming is more than just a matter of programming language. Traditional syllabi fail to teach students the “big picture” and students have difficulties taking advantage of objectoriented concepts. In this paper we present a holistic approach to a CS1 course in Java favouring general objectoriented concepts over the syntactical details of the language. We present goals for designing such a course and a case study showing interesting results.", "title": "" }, { "docid": "691d86e5bf01664c260150493e8fcb9c", "text": "There are many important applications, such as math function evaluation, digital signal processing, and built-in self-test, whose implementations can be faster and simpler if we can have large on-chip “tables” stored as read-only memories (ROMs). We show that conventional de facto standard 6T and 8T static random access memory (SRAM) bit cells can embed ROM data without area overhead or performance degradation on the bit cells. Just by adding an extra wordline (WL) and connecting the WL to selected access transistor of the bit cell (based on whether a 0 or 1 is to be stored as ROM data in that location), the bit cell can work both in the SRAM mode and in the ROM mode. In the proposed ROM-embedded SRAM, during SRAM operations, ROM data is not available. To retrieve the ROM data, special write steps associated with proper via connections load ROM data into the SRAM array. The ROM data is read by conventional load instruction with unique virtual address space assigned to the data. This allows the ROM-embedded cache (R-cache) to bypass tag arrays and translation look-aside buffers, leading to fast ROM operations. 
We show example applications to illustrate how the R-cache can lead to low-cost logic testing and faster evaluation of mathematical functions.", "title": "" }, { "docid": "bd3e5a403cc42952932a7efbd0d57719", "text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter", "title": "" }, { "docid": "dfd6367741547212520b4303bbd2b8d1", "text": "A highly digital two-stage fractional-N phaselocked loop (PLL) architecture utilizing a first-order 1-bit frequency-to-digital converter (FDC) is proposed and implemented in a 65 nm CMOS process. Performance of the first-order 1-bit FDC is improved by using a phase interpolatorbased fractional divider that reduces phase quantizer input span and by using a multiplying delay-locked loop that increases its oversampling ratio. We also describe an analogy between a time-to-digital converter (TDC) and a FDC followed by an accumulator that allows us to leverage the TDC-based PLL analysis techniques to study the impact of FDC characteristics on FDC-based fractional-N PLL (FDCPLL) performance. Utilizing proposed techniques, a prototype PLL achieves 1 MHz bandwidth, −101.6 dBc/Hz in-band phase noise, and 1.22 psrms (1 kHz–40 MHz) jitter while generating 5.031 GHz output from 31.25 MHz reference clock input. For the same output frequency, the stand-alone second-stage fractional-N FDCPLL achieves 1 MHz bandwidth, −106.1 dBc/Hz in-band phase noise, and 403 fsrms jitter with a 500 MHz reference clock input. The two-stage PLL consumes 10.1 mW power from a 1 V supply, out of which 7.1 mW is consumed by the second-stage FDCPLL.", "title": "" }, { "docid": "1e3d8ab33f0dda81e4f06eb57803852c", "text": "Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. 
As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data.", "title": "" }, { "docid": "241542e915e51ce1505c7d24641e4e0b", "text": "Over the past decade, research has increased our understanding of the effects of physical activity at opposite ends of the spectrum. Sedentary behaviour—too much sitting—has been shown to increase risk of chronic disease, particularly diabetes and cardiovascular disease. There is now a clear need to reduce prolonged sitting. Secondly, evidence on the potential of high intensity interval training inmanaging the same chronic diseases, as well as reducing indices of cardiometabolic risk in healthy adults, has emerged. This vigorous training typically comprises multiple 3-4 minute bouts of high intensity exercise interspersed with several minutes of low intensity recovery, three times a week. Between these two extremes of the activity spectrum is the mainstream public health recommendation for aerobic exercise, which is similar in many developed countries. The suggested target for older adults (≥65) is the same as for other adults (18-64): 150 minutes a week of moderate intensity activity in bouts of 10 minutes or more. It is often expressed as 30 minutes of brisk walking or equivalent activity five days a week, although 75 minutes of vigorous intensity activity spread across the week, or a combination of moderate and vigorous activity are sometimes suggested. Physical activity to improve strength should also be done at least two days a week. The 150 minute target is widely disseminated to health professionals and the public. However, many people, especially in older age groups, find it hard to achieve this level of activity. We argue that when advising patients on exercise doctors should encourage people to increase their level of activity by small amounts rather than focus on the recommended levels. The 150 minute target, although warranted, may overshadow other less concrete elements of guidelines. These include finding ways to do more lower intensity lifestyle activity. As people get older, activity may become more relevant for sustaining the strength, flexibility, and balance required for independent living in addition to the strong associations with hypertension, coronary heart disease, stroke, diabetes, breast cancer, and colon cancer. Observational data have confirmed associations between increased physical activity and reduction in musculoskeletal conditions such as arthritis, osteoporosis, and sarcopenia, and better cognitive acuity and mental health. Although these links may be modest and some lack evidence of causality, they may provide sufficient incentives for many people to be more active. Research into physical activity", "title": "" }, { "docid": "9b470feac9ae4edd11b87921934c9fc2", "text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. 
We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.", "title": "" }, { "docid": "4add7de7ed94bc100de8119ebd74967e", "text": "Wireless signal strength is susceptible to the phenomena of interference, jumping, and instability, which often appear in the positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform fusing calculation by adaptively determining the dynamic noise of a filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically requires a quite high computational burden: To reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m through the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).", "title": "" }, { "docid": "ba69b4c09bbcd6cfd50632a8d4bea877", "text": "In this report we consider the current status of the coverage of computer science in education at the lowest levels of education in multiple countries. Our focus is on computational thinking (CT), a term meant to encompass a set of concepts and thought processes that aid in formulating problems and their solutions in different fields in a way that could involve computers [130].\n The main goal of this report is to help teachers, those involved in teacher education, and decision makers to make informed decisions about how and when CT can be included in their local institutions. We begin by defining CT and then discuss the current state of CT in K-9 education in multiple countries in Europe as well as the United States. Since many students are exposed to CT outside of school, we also discuss the current state of informal educational initiatives in the same set of countries.\n An important contribution of the report is a survey distributed to K-9 teachers, aiming at revealing to what extent different aspects of CT are already part of teachers' classroom practice and how this is done. The survey data suggest that some teachers are already involved in activities that have strong potential for introducing some aspects of CT. In addition to the examples given by teachers participating in the survey, we present some additional sample activities and lesson plans for working with aspects of CT in different subjects. We also discuss ways in which teacher training can be coordinated as well as the issue of repositories. 
We conclude with future directions for research in CT at school.", "title": "" }, { "docid": "fa4480bbc460658bd1ea5804fdebc5ed", "text": "This paper examines the problem of how to teach multiple tasks to a Reinforcement Learning (RL) agent. To this end, we use Linear Temporal Logic (LTL) as a language for specifying multiple tasks in a manner that supports the composition of learned skills. We also propose a novel algorithm that exploits LTL progression and off-policy RL to speed up learning without compromising convergence guarantees, and show that our method outperforms the state-of-the-art approach on randomly generated Minecraft-like grids.", "title": "" }, { "docid": "405022c5a2ca49973eaaeb1e1ca33c0f", "text": "BACKGROUND\nPreanalytical factors are the main source of variation in clinical chemistry testing and among the major determinants of preanalytical variability, sample hemolysis can exert a strong influence on result reliability. Hemolytic samples are a rather common and unfavorable occurrence in laboratory practice, as they are often considered unsuitable for routine testing due to biological and analytical interference. However, definitive indications on the analytical and clinical management of hemolyzed specimens are currently lacking. Therefore, the present investigation evaluated the influence of in vitro blood cell lysis on routine clinical chemistry testing.\n\n\nMETHODS\nNine aliquots, prepared by serial dilutions of homologous hemolyzed samples collected from 12 different subjects and containing a final concentration of serum hemoglobin ranging from 0 to 20.6 g/L, were tested for the most common clinical chemistry analytes. Lysis was achieved by subjecting whole blood to an overnight freeze-thaw cycle.\n\n\nRESULTS\nHemolysis interference appeared to be approximately linearly dependent on the final concentration of blood-cell lysate in the specimen. This generated a consistent trend towards overestimation of alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine, creatine kinase (CK), iron, lactate dehydrogenase (LDH), lipase, magnesium, phosphorus, potassium and urea, whereas mean values of albumin, alkaline phosphatase (ALP), chloride, gamma-glutamyltransferase (GGT), glucose and sodium were substantially decreased. Clinically meaningful variations of AST, chloride, LDH, potassium and sodium were observed in specimens displaying mild or almost undetectable hemolysis by visual inspection (serum hemoglobin < 0.6 g/L). The rather heterogeneous and unpredictable response to hemolysis observed for several parameters prevented the adoption of reliable statistic corrective measures for results on the basis of the degree of hemolysis.\n\n\nCONCLUSION\nIf hemolysis and blood cell lysis result from an in vitro cause, we suggest that the most convenient corrective solution might be quantification of free hemoglobin, alerting the clinicians and sample recollection.", "title": "" }, { "docid": "7e6a3a04c24a0fc24012619d60ebb87b", "text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. 
This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.", "title": "" }, { "docid": "0fafa2597726dfeb4d35721c478f1038", "text": "Visual saliency models have enjoyed a big leap in performance in recent years, thanks to advances in deep learning and large scale annotated data. Despite enormous effort and huge breakthroughs, however, models still fall short in reaching human-level accuracy. In this work, I explore the landscape of the field emphasizing on new deep saliency models, benchmarks, and datasets. A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets. Further, I identify factors that contribute to the gap between models and humans and discuss the remaining issues that need to be addressed to build the next generation of more powerful saliency models. Some specific questions that are addressed include: in what ways current models fail, how to remedy them, what can be learned from cognitive studies of attention, how explicit saliency judgments relate to fixations, how to conduct fair model comparison, and what are the emerging applications of saliency models.", "title": "" }, { "docid": "1ffc6db796b8e8a03165676c1bc48145", "text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.", "title": "" } ]
scidocsrr
263c8c9b57287185550754bc221fae65
IoT based smart healthcare kit
[ { "docid": "ecd7da1f742b4c92f3c748fd19098159", "text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.", "title": "" } ]
[ { "docid": "6533ee7e13ab293f33f1747adff92fe5", "text": "The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its farreaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.", "title": "" }, { "docid": "4599791edd82107f40afc86a1367bf19", "text": "Acknowledgements It is a great pleasure to have an opportunity to thanks valuable beings for their continuous support and inspiration throughout the thesis work. I would like to extend my gratitude towards Dr. for all the guidance and great knowledge he shared during our course. The abundance of knowledge he has always satisfied our queries at every point. Thanks to Mr. Sumit Miglani, My guide for his contribution for timely reviews and suggestions in completing the thesis. Every time he provided the needed support and guidance. At last but not the least, a heartiest thanks to all my family and friends for being there every time I needed them. Abstract Address Resolution Protocol (ARP) is a protocol having simple architecture and have been in use since the advent of Open System Interconnection (OSI) network architecture. Its been working at network layer for the important dynamic conversion of network address i.e. Internet Protocol (IP) address to physical address or Media Access Control (MAC) address. Earlier it was sufficiently providing its services but in today \" s complex and more sophisticated unreliable network, security being one major issue, standard ARP protocol is vulnerable to many different kinds of attacks. These attacks lead to devastating loss of important information. With certain loopholes it has become easy to be attacked and with not so reliable security mechanism, confidentiality of data is being compromised. Therefore, a strong need is felt to harden the security system. Since, LAN is used in maximum organizations to get the different computer connected. So, an attempt has been made to enhance the working of ARP protocol to work in a more secure way. Any kind of attempts to poison the ARP cache (it maintains the corresponding IP and MAC address associations in the LAN network) for redirecting the data to unreliable host, are prevented beforehand. New modified techniques are proposed which could efficiently guard our ARP from attacker and protect critical data from being sniffed both internally and externally. Efficiency of these methods has been shown mathematically without any major impact on the performance of network. Main idea behind how these methods actually work and proceed to achieve its task has been explained with the help of flow chart and pseudo codes. With the help of different tools ARP cache is being monitored regularly and if any malicious activity is encountered, it is intimidated to the administrator immediately. 
So, in …", "title": "" }, { "docid": "4ae4aa05befe374ab4e06d1c002efb53", "text": "The convincing development in Internet of Things (IoT) enables the solutions to spur the advent of novel and fascinating applications. The main aim is to integrate IoT aware architecture to enhance smart healthcare systems for automatic environmental monitoring of hospital and patient health. Staying true to the IoT vision, we propose a smart hospital system (SHS), which relies on different, yet complimentary, technologies, specifically RFID, WSN and smart mobile, interoperating with each other through a Constrained Application Protocol (CoAP)/IPv6 over low-power wireless personal area network (6LoWPAN)/representational state transfer (REST) network infrastructure. RADIO frequency identification technologies have been increasingly used in various applications, such as inventory control, and object tracking. An RFID system typically consist of one or several readers and numerous tags. Each tag has a unique ID. The proposed SHS has highlighted a number of key capabilities and aspects of novelty, which represent a significant step forward.", "title": "" }, { "docid": "d4f3cc4ac102fc922499001c8a8ab6af", "text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue of Geriatrics & Aging) began with an approach to the neurological examination in normal aging and in disease,and reviewed components of the general physical, head and neck, neurovascular and cranial nerve examinations relevant to aging and dementia. Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3, featured here, reviews the assessment of coordination,balance and gait,and Part 4 will discuss the muscle stretch reflexes, pathological and primitive reflexes, sensory examination and concluding remarks. Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the", "title": "" }, { "docid": "6e5c74562ed54f068217fe98cdba946d", "text": "Consumer behavior is essentially a decision-making processes by consumers either individuals, groups, and organizations that includes the process of choosing, buying, obtaining, use of goods or services. The main question in consumer behavior research is how consumers make a purchase decision. This study indentifies factors that are statistically significant to impulsive buying to Kacang Garuda (Peanut) product of each gender in Surabaya. By using primary data with the population of people ages 18–40, collected with purposive sampling, by spreading questionnaire. Selected object is Garuda Peanut because peanut products are low involvement products that trigger the occurrence of impulsive buying behavior, as well as Garuda Peanut is the market leader for peanut products in Indonesia. 
This research limits the factors into three, product attractiveness attributed by unique and interesting package, attractive package color, and package size availability, word of mouth attributed by convincing salesman, info from relatives and info from friends, and quality attributed by reliability, conformance quality and durability. The data shows that product attractiveness and quality are significant in increasing the degree of impulsive buying to both gender, but word of mouth applies only to female gender.", "title": "" }, { "docid": "f9ca69c3a63403ff7a9e676847868dcd", "text": "BACKGROUND\nVegetarian nutrition is gaining increasing public attention worldwide. While some studies have examined differences in motivations and personality traits between vegetarians and omnivores, only few studies have considered differences in motivations and personality traits between the 2 largest vegetarian subgroups: lacto-ovo-vegetarians and vegans.\n\n\nOBJECTIVES\nTo examine differences between lacto-ovo-vegetarians and vegans in the distribution patterns of motives, values, empathy, and personality profiles.\n\n\nMETHODS\nAn anonymous online survey was performed in January 2014. Group differences between vegetarians and vegans in their initial motives for the choice of nutritional approaches, health-related quality of life (World Health Organization Quality of Life-BREF (WHOQOL-BREF)), personality traits (Big Five Inventory-SOEP (BFI-S)), values (Portraits Value Questionnaire (PVQ)), and empathy (Empathizing Scale) were analyzed by univariate analyses of covariance; P values were adjusted for multiple testing.\n\n\nRESULTS\n10,184 individuals completed the survey; 4,427 (43.5%) were vegetarians and 4,822 (47.3%) were vegans. Regarding the initial motives for the choice of nutritional approaches, vegans rated food taste, love of animals, and global/humanitarian reasons as more important, and the influence of their social environment as less important than did vegetarians. Compared to vegetarians, vegans had higher values on physical, psychological, and social quality of life on the WHOQOL-BREF, and scored lower on neuroticism and higher on openness on the BFI-S. In the PVQ, vegans scored lower than vegetarians on power/might, achievement, safety, conformity, and tradition and higher on self-determination and universalism. Vegans had higher empathy than vegetarians (all p < 0.001).\n\n\nDISCUSSION\nThis survey suggests that vegans have more open and compatible personality traits, are more universalistic, empathic, and ethically oriented, and have a slightly higher quality of life when compared to vegetarians. Given the small absolute size of these differences, further research is needed to evaluate whether these group differences are relevant in everyday life and can be confirmed in other populations.", "title": "" }, { "docid": "c7a96129484bbedd063a0b322d9ae3d3", "text": "BACKGROUND\nNon-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and paucity of SNPs in a genome.\n\n\nRESULTS\nWe have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. 
This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with the fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it. Our method had a perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur source code is available on GitHub at https://github.com/compbio-UofT/FSDA CONTACT: brudno@cs.toronto.edu.", "title": "" }, { "docid": "40ad3f021008c82d8138ad38ca489ad4", "text": "This paper reviews the current state of the art on reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to both optimal $\\mathcal{H}_{2}$ and $\\mathcal{H}_\\infty$ control problems, as well as graphical games, will be reviewed. RL methods learn the solution to optimal control and game problems online and using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy RL for both CT and DT systems. Finally, we review several applications.", "title": "" }, { "docid": "4a27c9c13896eb50806371e179ccbf33", "text": "A geographical information system (GIS) is proposed as a suitable tool for mapping the spatial distribution of forest fire danger. Using a region severely affected by forest fires in Central Spain as the study area, topography, meteorological data, fuel models and human-caused risk were mapped and incorporated within a GIS. Three danger maps were generated: probability of ignition, fuel hazard and human risk, and all of them were overlaid in an integrated fire danger map, based upon the criteria established by the Spanish Forest Service. GIS make it possible to improve our knowledge of the geographical distribution of fire danger, which is crucial for suppression planning (particularly when hotshot crews are involved) and for elaborating regional fire defence plans.", "title": "" }, { "docid": "f3b0216455db9e0c1a204df16fb7499e", "text": "While the study of implicit learning is nothing new, the field as a whole has come to embody — over the last decade or so — ongoing questioning about three of the most fundamental debates in the cognitive sciences: The nature of consciousness, the nature of mental representation (in particular the difficult issue of abstraction), and the role of experience in shaping the cognitive system. Our main goal in this chapter is to offer a framework that attempts to integrate current thinking about these three issues in a way that specifically links consciousness with adaptation and learning. 
Our assumptions about this relationship are rooted in further assumptions about the nature of processing and of representation in cognitive systems. When considered together, we believe that these assumptions offer a new perspective on the relationships between conscious and unconscious processing and on the function of consciousness in cognitive systems. To begin in a way that reflects the goals of this volume, we can ask the question: \" What is implicit learning for? \" In asking this question, one presupposes that implicit learning is a special process that can be distinguished from, say, explicit learning or, even more pointedly, from learning tout court. The most salient feature attributed to implicit learning is of course that it is implicit, by which most researchers in the area actually mean unconscious. Hence the question \"What is implicit learning for?\" is in fact a way of asking about the function of consciousness in learning that specifically assumes that conscious and unconscious learning have different functions. The central idea that we will develop in this chapter is that conscious and unconscious learning are actually two different expressions of a single set of constantly operating graded, dynamic processes of adaptation. While this position emphasizes that conscious and unconscious processing differ only in degree rather than in kind, it is nevertheless not incompatible with the notion that consciousness has specific functions in the cognitive economy. Indeed, our main conclusion will be that the function of consciousness is to offer flexible adaptive control over behavior. By adaptive here, we do not mean simply the …", "title": "" }, { "docid": "c0d7cd54a947d9764209e905a6779d45", "text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. 
Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.", "title": "" }, { "docid": "4cfd7fab35e081f2d6f81ec23c4d0d18", "text": "In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.", "title": "" }, { "docid": "6e52471655da243e278f121cd1b12596", "text": "Finite element method (FEM) is a powerful tool in analysis of electrical machines however, the computational cost is high depending on the geometry of analyzed machine. In synchronous reluctance machines (SyRM) with transversally laminated rotors, the anisotropy of magnetic circuit is provided by flux barriers which can be of various shapes. Flux barriers of shape based on Zhukovski's curves seem to provide very good electromagnetic properties of the machine. Complex geometry requires a fine mesh which increases computational cost when performing finite element analysis. By using magnetic equivalent circuit (MEC) it is possible to obtain good accuracy at low cost. This paper presents magnetic equivalent circuit of SyRM with new type of flux barriers. Numerical calculation of flux barriers' reluctances will be also presented.", "title": "" }, { "docid": "ff83e090897ed7b79537392801078ffb", "text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. 
Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.", "title": "" }, { "docid": "11851c0615ad483b6c4f9d0e4ccc30b2", "text": "In the era of information technology, human tend to develop better and more convenient lifestyle. Nowadays, almost all the electronic devices are equipped with wireless technology. A wireless communication network has numerous advantages and becomes an important application. The enhancements provide by the wireless technology gives the ease of control to the users and not least the mobility of the devices within the network. It is use the Zigbee as the wireless modules. The Smart Ordering System introduced current and fast way to order food at a restaurant. The system uses a small keypad to place orders and the order made by inserting the code on the keypad menu. This code comes along with the menu. The signal will be delivered to the order by the Zigbee technology, and it will automatically be displayed on the screen in the kitchen. Keywords— smart, ordering, S.O.S, Zigbee.", "title": "" }, { "docid": "09710d5e583ac83c2279d8fab48abe8d", "text": "This paper describes the upgrading process of the Multilingual Central Repository (MCR). The new MCR uses WordNet 3.0 as Interlingual-Index (ILI). Now, the current version of the MCR integrates in the same EuroWordNet framework wordnets from five different languages: English, Spanish, Catalan, Basque and Galician. In order to provide ontological coherence to all the integrated wordnets, the MCR has also been enriched with a disparate set of ontologies: Base Concepts, Top Ontology, WordNet Domains and Suggested Upper Merged Ontology. We also suggest a novel approach for improving some of the semantic resources integrated in the MCR, including a semiautomatic method to propagate domain information. The whole content of the MCR is freely available.", "title": "" }, { "docid": "1aa39f265d476fca4c54af341b6f2bde", "text": "Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat surprisingly, we find that DNNs with randomly-initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. 
Our conjecture is that this phenomenon occurs because these explanations are dominated by the lower level features of a DNN, and that a DNN’s architecture provides a strong prior which significantly affects the representations learned at these lower layers.", "title": "" }, { "docid": "4b156066e72d0e8bf220c3e13738d91c", "text": "We present an unsupervised approach for abnormal event detection in videos. We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art.", "title": "" }, { "docid": "31ab58f42f5f34f765d28aead4ae7fe3", "text": "Machine learning (ML) has become a core component of many real-world applications and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack has shown that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks have many assumptions on the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model’s training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and thereby pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat using eight diverse datasets which show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against such broader class of membership inference attacks that maintain a high level of utility of the ML model.", "title": "" } ]
scidocsrr
530ade4822f3ba2f821bb2c7f244a761
Education and Economic Growth
[ { "docid": "4fa7ee44cdc4b0cd439723e9600131bd", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" } ]
[ { "docid": "2d953dda47c80304f8b2fa0d6e08c2f8", "text": "A facial recognition system is an application which is used for identifying or verifying a person from a digital image or a video frame. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is generally used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology since people themselves are the main source of interest. Network access control via face recognition not only makes hackers virtually impossible to steal one's \"password\", but also increases the user friendliness in human-computer interaction. Although humans have always had the innate ability to recognize and distinguish between faces, yet computers only recently have shown the same ability. In the mid 1960s, scientists began work on using the computer to recognize human faces. Since then, facial recognition software has come a long way. In this article, I have explored the reasons behind using facial recognition, the products developed to implement this biometrics technique and also the criticisms and advantages that are bounded with it.", "title": "" }, { "docid": "3fa1abd26925407bbf34716060a1a589", "text": "Generating knowledge from data is an increasingly important activity. This process of data exploration consists of multiple tasks: data ingestion, visualization, statistical analysis, and storytelling. Though these tasks are complementary, analysts often execute them in separate tools. Moreover, these tools have steep learning curves due to their reliance on manual query specification. Here, we describe the design and implementation of DIVE, a web-based system that integrates state-of-the-art data exploration features into a single tool. DIVE contributes a mixed-initiative interaction scheme that combines recommendation with point-and-click manual specification, and a consistent visual language that unifies different stages of the data exploration workflow. In a controlled user study with 67 professional data scientists, we find that DIVE users were significantly more successful and faster than Excel users at completing predefined data visualization and analysis tasks.", "title": "" }, { "docid": "e05fc780d1f3fd4061918e50f5dd26a0", "text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.", "title": "" }, { "docid": "727c36aac7bd0327f3edb85613dcf508", "text": "The interpretation of adjective-noun pairs plays a crucial role in tasks such as recognizing textual entailment. Formal semantics often places adjectives into a taxonomy which should dictate adjectives’ entailment behavior when placed in adjective-noun compounds. 
However, we show experimentally that the behavior of subsective adjectives (e.g. red) versus non-subsective adjectives (e.g. fake) is not as cut and dry as often assumed. For example, inferences are not always symmetric: while ID is generally considered to be mutually exclusive with fake ID, fake ID is considered to entail ID. We discuss the implications of these findings for automated natural language understanding.", "title": "" }, { "docid": "9f14b13b9c89fcbe330a77285eda432e", "text": "Purpose – The purpose of this research is to examine the manner in which employees access, create and share information and knowledge within a complex supply chain with a view to better understanding how to identify and manage barriers which may inhibit such exchanges. Design/methodology/approach – An extensive literature review combined with an in-depth case study analysis identified a range of potential transfer barriers. These in turn were examined in terms of their consistency of impact by an end-to-end process survey conducted within an IBM facility. Findings – Barrier impact cannot be assumed to be uniform across the core processes of the organization. Process performance will be impacted upon in different ways and subject to varying degrees of influence by the transfer barriers. Barrier identification and management must take place at a process rather than at the organizational level. Research limitations/implications – The findings are based, in the main, on an extensive single company study. Although significant in terms of influencing both knowledge and information systems design and management the study/findings have still to be fully replicated across a range of public and private organizations. Originality/value – The deployment of generic information technology and business systems needs to be questioned if they have been designed and implemented to satisfy organizational rather than process needs.", "title": "" }, { "docid": "7fd396ca8870c3a2fe99e63f24aaf9f7", "text": "This paper presents a one-point calibration gaze tracking method based on eyeball kinematics using stereo cameras. By using two cameras and two light sources, the optic axis of the eye can be estimated. One-point calibration is required to estimate the angle of the visual axis from the optic axis. The eyeball rotates with optic and visual axes based on the eyeball kinematics (Listing's law). Therefore, we introduced eyeball kinematics to the one-point calibration process in order to properly estimate the visual axis. The prototype system was developed and it was found that the accuracy was under 1° around the center and bottom of the display.", "title": "" }, { "docid": "8015f5668df95f83e353550d54eac4da", "text": "Counterfeit currency is a burning question throughout the world. The counterfeiters are becoming harder to track down because of their rapid adoption of and adaptation with highly advanced technology. One of the most effective methods to stop counterfeiting can be the widespread use of counterfeit detection tools/software that are easily available and are efficient in terms of cost, reliability and accuracy. This paper presents a core software system to build a robust automated counterfeit currency detection tool for Bangladeshi bank notes. 
The software detects fake currency by extracting existing features of banknotes such as micro-printing, optically variable ink (OVI), water-mark, iridescent ink, security thread and ultraviolet lines using OCR (Optical Character recognition), Contour Analysis, Face Recognition, Speeded UP Robust Features (SURF) and Canny Edge & Hough transformation algorithm of OpenCV. The success rate of this software can be measured in terms of accuracy and speed. This paper also focuses on the pros and cons of implementation details that may degrade the performance of image processing based paper currency authentication systems.", "title": "" }, { "docid": "ad60e181edbf2500da6f78b96fd513d1", "text": "While vendors on the Internet may have enjoyed an increase in the number of clicks on their Web sites, they have also faced disappointments in converting these clicks into purchases. Lack of trust is identified as one of the greatest barriers inhibiting Internet transactions. Thus, it is essential to understand how trust is created and how it evolves in the Electronic Commerce (EC) context throughout a customer’s purchase experience with an Internet store. As the first step in studying the dynamics of online trust building, this research aims to compare online trust-building factors between potential customers and repeat customers. For this purpose, we classify trust in an Internet store into potential customer trust and repeat customer trust, depending on the customer’s purchase experience with the store. We find that trust building differs between potential customers and repeat customers in terms of antecedents. We also compare the effects of shared antecedents on trust between potential customers and repeat customers. We find that customer satisfaction has a stronger effect on trust building for repeat ∗ Soon Ang was the accepting senior editor for this paper. Harrison McKnight and Suzanne Rivard were reviewers for this paper. Kim, Xu, and Koh/A Comparison of Online Trust Building Factors Journal of the Association for Information Systems Vol. 5 No. 10, pp.392-420/October 2004 393 customers than other antecedents. We discuss the theoretical reasons for the differences and the implications of our research.", "title": "" }, { "docid": "6622922fb28cce3df8c68c21ac55e20e", "text": "Semantic-based approaches are relatively new technologies. Some of these technologies are supported by specifications of W3 Consortium, i.e. RDF, SPARQL and so on. There are many areas where semantic data can be utilized, e.g. social networks, annotation of protein sequences etc. From the physical database design point of view, several index data structures are utilized to handle this data. In many cases, the well-known B-tree is used as a basic index supporting some operations. Since the semantic data are multidimensional, a common way is to use a number of B-trees to index the data. In this article, we review other index data structures; we show that we can create only one index when we utilize a multidimensional data structure like the R-tree. We compare a performance of the B-tree indices with the R-tree and some its variants. Our experiments are performed over a huge semantic database, we show advantages and disadvantages of these data structures.", "title": "" }, { "docid": "47baa10f94368bc056bbca3dd4caec0c", "text": "We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. 
We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.", "title": "" }, { "docid": "8251cc4742adfb41fdfe611dc45bb311", "text": "Recommending news articles is a challenging task due to the continuous changes in the set of available news articles and the context-dependent preferences of users. Traditional recommender approaches are optimized for analyzing static data sets. In news recommendation scenarios, characterized by continuous changes, high volume of messages, and tight time constraints, alternative approaches are needed. In this work we present a highly scalable recommender system optimized for the processing of streams. We evaluate the system in the CLEF NewsREEL challenge. Our system is built on Apache Spark enabling the distributed processing of recommendation requests ensuring the scalability of our approach. The evaluation of the implemented system shows that our approach is suitable for the news recommendation scenario and provides high-quality results while satisfying the tight time constraints.", "title": "" }, { "docid": "16d248e2d2ef584c52d09e7c6cfd63e7", "text": "Over the last few years, the convincing forward steps in the development of Internet of Things (IoT)-enabling solutions are spurring the advent of novel and fascinating applications. A key shortcoming of today's healthcare systems is the lack of security and real-time monitoring. In the wake of this tendency, this paper proposes a novel, IoT-aware, smart architecture for automatic monitoring and tracking of patients from their home itself. Staying true to the IoT vision, we propose an Automation Healthcare System (AHS). The proposed AHS is designed to investigate advanced home health care services. Data produced in the AHS are shared with doctors and patients through the IoT. The system utilizes IoT telemetry to transmit data from sensors to a remote monitor. This paper discusses recent advances in wearable-sensor healthcare systems for monitoring temperature and heart rate, along with energy-efficient routing. The system provides security and real-time monitoring.", "title": "" }, { "docid": "fa52d586e7e6c92444845881ab1990cf", "text": "This paper proposes a novel rotor contour design for variable reluctance (VR) resolvers by injecting auxiliary air-gap permeance harmonics. Based on the resolver model with nonoverlapping tooth-coil windings, the influence of the air-gap length function is first investigated by the finite element (FE) method, and the detection accuracy of designs with higher values of fundamental wave factor may deteriorate due to the increasing third order of output voltage harmonics. Further, the origins of the third harmonics are investigated by analytical derivation and FE analyses of output voltages. Furthermore, it is proved that the voltage harmonics and the detection accuracy are significantly improved by injecting auxiliary air-gap permeance harmonics in the design of rotor contour.
In addition, the proposed design can also be employed to eliminate voltage tooth harmonics in a conventional VR resolver topology. Finally, VR resolver prototypes with the conventional and the proposed rotors are fabricated and tested respectively to verify the analyses.", "title": "" }, { "docid": "4b46631305f749b029392bdbc72a08d2", "text": "Microfacet models have proven very successful for modeling light reflection from rough surfaces. In this paper we review microfacet theory and demonstrate how it can be extended to simulate transmission through rough surfaces such as etched glass. We compare the resulting transmission model to measured data from several real surfaces and discuss appropriate choices for the microfacet distribution and shadowing-masking functions. Since rendering transmission through media requires tracking light that crosses at least two interfaces, good importance sampling is a practical necessity. Therefore, we also describe efficient schemes for sampling the microfacet models and the corresponding probability density functions.", "title": "" }, { "docid": "159b604934e2aa4150d0e3e0222c6f1e", "text": "Assembly line balancing is significant for efficient and cost effective production of the products and is therefore gaining popularity in recent years. However, several uncertain events in assembly lines might causes variation in the task time and due to these variations there always remains a possibility that completion time of tasks might exceed the predefined cycle time. To hedge against this issue, a single model assembly line balancing problem with uncertain task times and multiple objectives is presented. Current research is aimed to minimize cycle time in addition to maximize the probability that completion time of tasks on stations will not exceed the cycle time and minimize smoothness index simultaneously. A Pareto based artificial bee colony algorithm is proposed to get Pareto solution of the multiple objectives. The proposed algorithm called Pareto based artificial bee colony algorithm (PBABC) introduces some extra steps i.e., sorting of food sources, niche technique and preserve some elitists in the standard artificial bee colony algorithm (ABC) to get Pareto solution. Furthermore, the effective parameters of the proposed algorithm are tuned using Taguchi method. Experiments are performed to solve standard assembly line balancing problems taken from operations research (OR) library. The performance of proposed PBABC algorithm is compared with a famous multi objective optimization algorithm NSGA II, in literature. Computational result shows that proposed PBABC algorithm outperforms NSGA II in terms of the quality of Pareto solutions and computational time. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cb8fa49be63150e1b85f98a44df691a5", "text": "SQL tuning---the attempt to improve a poorly-performing execution plan produced by the database query optimizer---is a critical aspect of database performance tuning. Ironically, as commercial databases strive to improve on the manageability front, SQL tuning is becoming more of a black art. It requires a high level of expertise in areas like (i) query optimization, run-time execution of query plan operators, configuration parameter settings, and other database internals; (ii) identification of missing indexes and other access structures; (iii) statistics maintained about the data; and (iv) characteristics of the underlying storage system. 
Since database systems, their workloads, and the data that they manage are not getting any simpler, database users and administrators often rely on trial and error for SQL tuning.\n In this paper, we take the position that the trial-and-error (or, experiment-driven) process of SQL tuning can be automated by the database system in an efficient manner; freeing the user or administrator from this burden in most cases. A number of current approaches to SQL tuning indeed take an experiment-driven approach. We are prototyping a tool, called zTuned, that automates experiment-driven SQL tuning. This paper describes the design choices in zTuned to address three nontrivial issues: (i) how is the SQL tuning logic integrated with the regular query optimizer, (ii) how to plan the experiments to conduct so that a satisfactory (new) plan can be found quickly, and (iii) how to conduct experiments with minimal impact on the user-facing production workload. We conclude with a preliminary empirical evaluation and outline promising new directions in automated SQL tuning.", "title": "" }, { "docid": "56bab9b1d6ea6b26134b02ae4d76f864", "text": "The 3G iPhone was the first consumer device to provide a seamless integration of three positioning technologies: Assisted GPS (A-GPS), WiFi positioning and cellular network positioning. This study presents an evaluation of the accuracy of locations obtained using these three positioning modes on the 3G iPhone. A-GPS locations were validated using surveyed benchmarks and compared to a traditional low-cost GPS receiver running simultaneously. WiFi and cellular positions for indoor locations were validated using high resolution orthophotography. Results indicate that A-GPS locations obtained using the 3G iPhone are much less accurate than those from regular autonomous GPS units (average median error of 8 m for ten 20-minute field tests) but appear sufficient for most Location Based Services (LBS). WiFi locations using the 3G iPhone are much less accurate (median error of 74 m for 58 observations) and fail to meet the published accuracy specifications. Positional errors in WiFi also reveal erratic spatial patterns resulting from the design of the calibration effort underlying the WiFi positioning system. Cellular positioning using the 3G iPhone is the least accurate positioning method (median error of 600 m for 64 observations), consistent with previous studies. Pros and cons of the three positioning technologies are presented in terms of coverage, accuracy and reliability, followed by a discussion of the implications for LBS using the 3G iPhone and similar mobile devices.", "title": "" }, { "docid": "c5f749c36b3d8af93c96bee59f78efe5", "text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. 
Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.", "title": "" }, { "docid": "d973047c3143043bb25d4a53c6b092ec", "text": "Persian License Plate Detection and Recognition System is an image-processing technique used to identify a vehicle by its license plate. In fact this system is one kind of automatic inspection of transport, traffic and security systems and is of considerable interest because of its potential applications to areas such as automatic toll collection, traffic law enforcement and security control of restricted areas. License plate location is an important stage in vehicle license plate recognition for automated transport system. This paper presents a real time and robust method of license plate detection and recognition from cluttered images based on the morphology and template matching. In this system main stage is the isolation of the license plate from the digital image of the car obtained by a digital camera under different circumstances such as illumination, slop, distance, and angle. The algorithm starts with preprocessing and signal conditioning. Next license plate is localized using morphological operators. Then a template matching scheme will be used to recognize the digits and characters within the plate. This system implemented with help of Isfahan Control Traffic organization and the performance was 98.2% of correct plates identification and localization and 92% of correct recognized characters. The results regarding the complexity of the problem and diversity of the test cases show the high accuracy and robustness of the proposed method. The method could also be applicable for other applications in the transport information systems, where automatic recognition of registration plates, shields, signs, and so on is often necessary. This paper presents a morphology-based method.", "title": "" }, { "docid": "06f1c7daafcf59a8eb2ddf430d0d7f18", "text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). 
During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.", "title": "" } ]
scidocsrr
4dd6c30912c64e7dd77abbad74c10151
Resolving the Predicament of Android Custom Permissions
[ { "docid": "40d4716214b80ff944c552dfee09f5ec", "text": "Since the appearance of Android, its permission system was central to many studies of Android security. For a long time, the description of the architecture provided by Enck et al. in [31] was immutably used in various research papers. The introduction of highly anticipated runtime permissions in Android 6.0 forced us to reconsider this model. To our surprise, the permission system evolved with almost every release. After analysis of 16 Android versions, we can confirm that the modifications, especially introduced in Android 6.0, considerably impact the aptness of old conclusions and tools for newer releases. For instance, since Android 6.0 some signature permissions, previously granted only to apps signed with a platform certificate, can be granted to third-party apps even if they are signed with a non-platform certificate; many permissions considered before as threatening are now granted by default. In this paper, we review in detail the updated system, introduced changes, and their security implications. We highlight some bizarre behaviors, which may be of interest for developers and security researchers. We also found a number of bugs during our analysis, and provided patches to AOSP where possible.", "title": "" } ]
[ { "docid": "660bc85f84d37a98e78a34ccf1c8b1ab", "text": "In this paper, we evaluate the performance and experience differences between direct touch and mouse input on horizontal and vertical surfaces using a simple application and several validated scales. We find that, not only are both speed and accuracy improved when using the multi-touch display over a mouse, but that participants were happier and more engaged. They also felt more competent, in control, related to other people, and immersed. Surprisingly, these results cannot be explained by the intuitiveness of the controller, and the benefits of touch did not come at the expense of perceived workload. Our work shows the added value of considering experience in addition to traditional measures of performance, and demonstrates an effective and efficient method for gathering experience during inter-action with surface applications. We conclude by discussing how an understanding of this experience can help in designing touch applications.", "title": "" }, { "docid": "1b92575dd7c34c3d89fe3b1629731c40", "text": "In this paper we give an overview of our research on nonphotorealistic rendering methods for computer-generated pencil drawing. Our approach to the problem of simulating pencil drawings was to break it down into the subproblems of (1) simulating first the drawing materials (graphite pencil and drawing paper, blenders and kneaded eraser), (2) developing drawing primitives (individual pencil strokes and mark-making to create tones and textures), (3) simulating the basic rendering techniques (outlining and shading of 3D models) used by artists and illustrators familiar with pencil rendering, and (4) implementing the control of drawing steps from preparatory sketches to finished rendering results. We demonstrate the capabilities of our approach with a variety of images generated from reference images and 3D models. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms; I.6.3 [Simulation and Modeling]: Applications—.", "title": "" }, { "docid": "f755019089b2477573e65336932bcc5d", "text": "Parts-of-Speech (POS) tagging plays vital roles in the field of Natural Language Processing (NLP), such as - machine translation, spell checker, information retrieval, speech processing, emotion analysis and so on. Bangla is a very inflectional language that induces many variants from a single word. Although there is a few POS Tagger in Bangla language, very small of them address the essence of suffices to identify tag of the words. In this regard, we propose an automated POS Tagging system for Bangla language based on word-suffixes. In our system, we use our own stemming technique to retrieve a possible minimum root words and apply rules according to different forms of suffixes. Moreover, we incorporate a Bangla vocabulary that contains more than 45,000 words with their default tag and a patterned based verb-data-set. These facilitate to improve tagging efficiency of Bangla POS Tagger. We experiment our proposed system on a Bangla text corpus. The result shows that our proposed Bangla POS Tagger has outperformed the known related tagging systems.", "title": "" }, { "docid": "19d554b2ef08382418979bf7ceb15baf", "text": "In this paper, we address the cross-lingual topic modeling, which is an important technique that enables global enterprises to detect and compare topic trends across global markets. 
Previous works in cross-lingual topic modeling have proposed methods that utilize parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpus in many cases are not available. In this research, we incorporate techniques of mapping cross-lingual word space and the topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different language. In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.", "title": "" }, { "docid": "ad860674746dcf04156b3576174a9120", "text": "Predicting the popularity dynamics of Twitter hashtags has a broad spectrum of applications. Existing works have primarily focused on modeling the popularity of individual tweets rather than the underlying hashtags. As a result, they fail to consider several realistic factors contributing to hashtag popularity. In this paper, we propose Large Margin Point Process (LMPP), a probabilistic framework that integrates hashtag-tweet influence and hashtaghashtag competitions, the two factors which play important roles in hashtag propagation. Furthermore, while considering the hashtag competitions, LMPP looks into the variations of popularity rankings of the competing hashtags across time. Extensive experiments on seven real datasets demonstrate that LMPP outperforms existing popularity prediction approaches by a significant margin. Additionally, LMPP can accurately predict the relative rankings of competing hashtags, offering additional advantage over the state-of-the-art baselines.", "title": "" }, { "docid": "5b25230c26cb4f7687b561e20f3da6f3", "text": "This paper presents an approach to optimal design of elastic flywheels using an Injection Island Genetic Algorithm (iiGA). An iiGA in combination with a finite element code is used to search for shape variations to optimize the Specific Energy Density (SED) of elastic flywheels. SED is defined as the amount of rotational energy stored per unit mass. iiGAs seek solutions simultaneously at different levels of refinement of the problem representation (and correspondingly different definitions of the fitness function) in separate sub-populations (islands). Solutions are sought first at low levels of refinement with an axisymmetric plane stress finite element code for high speed exploration of the coarse design space. Next, individuals are injected into populations with a higher level of resolution that uses an axisymmetric three dimensional finite element model to “ fine-tune” the flywheel designs. In true multi -objective optimization, various “sub-fitness” functions can be defined that represent “good” aspects of the overall fitness function. Solutions can be sought for these various “sub-fitness” functions on different nodes and injected into a node that evaluates the overall fitness. Allowing subpopulations to explore different regions of the fitness space simultaneously allows relatively robust and efficient exploration in problems for which fitness evaluations are costly. 
1.0 INTRODUCTION This paper will describe the advantages of searching with an axisymmetric plane stress finite element model (with a “sub-fitness” function) to quickly find building blocks needed to inject into an axisymmetric three-dimensional finite element model through use of an iiGA. An optimal annular composite flywheel shape will be sought by an iiGA and, for comparison, by a “ring” topology parallel GA. The flywheel is modeled as a series of concentric rings (see Figure 1). The thickness of each ring varies linearly in the radial direction with the possibilit y for a diverse set of material choices for each ring. Figure 2 shows a typical flywheel model in which symmetry is used to increase computational eff iciency. The overall fitness function for the genetic algorithm GALOPPS was the specific energy density (SED) of a flywheel, which is defined as: SED I mass = 1 2 2 ω 1.) where ω is the angular velocity of the flywheel (“sub-fitness” function), I is the mass moment of inertia defined by:", "title": "" }, { "docid": "71062284df13ccb63b6aaefde02ebf85", "text": "The ability to store and later use information is essential for a variety of adaptive behaviors, including integration, learning, generalization, prediction and inference. In this Review, we survey theoretical principles that can allow the brain to construct persistent states for memory. We identify requirements that a memory system must satisfy and analyze existing models and hypothesized biological substrates in light of these requirements. We also highlight open questions, theoretical puzzles and problems shared with computer science and information theory.", "title": "" }, { "docid": "65b933f72f74a17777baa966658f4c42", "text": "We describe the epidemic of obesity in the United States: escalating rates of obesity in both adults and children, and why these qualify as an epidemic; disparities in overweight and obesity by race/ethnicity and sex, and the staggering health and economic consequences of obesity. Physical activity contributes to the epidemic as explained by new patterns of physical activity in adults and children. Changing patterns of food consumption, such as rising carbohydrate intake--particularly in the form of soda and other foods containing high fructose corn syrup--also contribute to obesity. We present as a central concept, the food environment--the contexts within which food choices are made--and its contribution to food consumption: the abundance and ubiquity of certain types of foods over others; limited food choices available in certain settings, such as schools; the market economy of the United States that exposes individuals to many marketing/advertising strategies. Advertising tailored to children plays an important role.", "title": "" }, { "docid": "e2d431708d34533f4390d17a21bc7373", "text": "Credit Derivatives are continuing to enjoy major growth in the financial markets, aided and abetted by sophisticated product development and the expansion of product applications beyond price management to the strategic management of portfolio risk. As Blythe Masters, global head of credit derivatives marketing at J.P. Morgan in New York points out: \" In bypassing barriers between different classes, maturities, rating categories, debt seniority levels and so on, credit derivatives are creating enormous opportunities to exploit and profit from associated discontinuities in the pricing of credit risk \". 
With such intense and rapid product development Risk Publications is delighted to introduce the first Guide to Credit Derivatives, a joint project with J.P. Morgan, a pioneer in the use of credit derivatives, with contributions from the RiskMetrics Group, a leading provider of risk management research, data, software, and education. The guide will be of great value to risk managers addressing portfolio concentration risk, issuers seeking to minimise the cost of liquidity in the debt capital markets and investors pursuing assets that offer attractive relative value.", "title": "" }, { "docid": "fff53c626db93d568b4e9e6c13ef6f86", "text": "We give a correspondence between enriched categories and the Gauss-Kleene-Floyd-Warshall connection familiar to computer scientists. This correspondence shows this generalization of categories to be a close cousin to the generalization of transitive closure algorithms. Via this connection we may bring categorical and 2-categorical constructions into an active but algebraically impoverished arena presently served only by semiring constructions. We illustrate these techniques by applying them to Birkoff’s poset arithmetic, interpretable as an algebra of “true concurrency.” The Floyd-Warshall algorithm for generalized transitive closure [AHU74] is the code fragment for v do for u, w do δuw + = δuv · δvw. Here δuv denotes an entry in a matrix δ, or equivalently a label on the edge from vertex u to vertex v in a graph. When the matrix entries are truth values 0 or 1, with + and · interpreted respectively as ∨ and ∧, we have Warshall’s algorithm for computing the transitive closure δ+ of δ, such that δ+ uv = 1 just when there exists a path in δ from u to v. When the entries are nonnegative reals, with + as min and · as addition, we have Floyd’s algorithm for computing all shortest paths in a graph: δ+ uv is the minimum, over all paths from u to v in δ, of the sum of the edges of each path. Other instances of this algorithm include Kleene’s algorithm for translating finite automata into regular expressions, and Gauss’s algorithm for inverting a matrix, in each case with an appropriate choice of semiring. Not only are these algorithms the same up to interpretation of the data, but so are their correctness proofs. This begs for a unifying framework, which is found in the notion of semiring. A semiring is a structure differing from a ring principally in that its additive component is not a group but merely a monoid, see AHU [AHU74] for a more formal treatment. Other matrix problems and algorithms besides Floyd-Warshall, such as matrix multiplication and the various recursive divide-and-conquer approaches to closure, also lend themselves to this abstraction. This abstraction supports mainly vertex-preserving operations on such graphs. Typical operations are, given two graphs δ, on a common set of vertices, to form their pointwise sum δ + defined as (δ + )uv = δuv + uv, their matrix product δ defined as (δ )uv = δu− · −v (inner product), along with their transitive, symmetric, and reflexive closures, all on the same vertex set. We would like to consider other operations that combine distinct vertex sets in various ways. The two basic operations we have in mind are the disjoint union and cartesian product of such graphs, along with such variations of these operations as pasting (as not-so-disjoint union), concatenation (as a disjoint union with additional edges from one component to the other), etc. 
An efficient way to obtain a usefully large library of such operations is to impose an appropriate categorical structure on the collection of such graphs. In this paper we show how to use enriched categories to provide such structure while at the same time extending the notion of semiring to the more general notion of monoidal category. In so doing we find two layers of categorical structure: 1 enriched categories in the lower layer, as a generalization of graphs, and ordinary categories in the upper layer having enriched categories for its objects. The graph operations we want to define are expressible as limits and colimits in the upper (ordinary) categories. We first make a connection between the two universes of graph theory and category theory. We assume at the outset that vertices of graphs correspond to objects of categories, both for ordinary categories and enriched categories. The interesting part is how the edges are treated. The underlying graph U(C) of a category C consists of the objects and morphisms of C, with no composition law or identities. But there may be more than one morphism between any two vertices, whereas in graph theory one ordinarily allows just one edge. These “multigraphs” of category theory would therefore appear to be a more general notion than the directed graphs of graph theory. A staple of graph theory however is the label, whether on a vertex or an edge. If we regard a homset as an edge labeled with a set then a multigraph is the case of an edge-labeled graph where the labels are sets. So a multigraph is intermediate in generality between a directed graph and an edge-labeled directed graph. So starting from graphs whose edges are labeled with sets, we may pass to categories by specifying identities and a composition law, or we may pass to edge-labeled graphs by allowing other labels than sets. What is less obvious is that we can elegantly and usefully do both at once, giving rise to enriched categories. The basic ideas behind enriched categories can be traced to Mac Lane [Mac65], with much of the detail worked out by Eilenberg and Kelly [EK65], with the many subsequent developments condensed by Kelly [Kel82]. Lawvere [Law73] provides a highly readable account of the concepts. We require of the edge labels only that they form a monoidal category. Roughly speaking this is a set bearing the structure of both a category and a monoid. Formally a monoidal category D = 〈D,⊗, I, α, λ, ρ〉 is a category D = 〈D0,m, i〉, a functor ⊗:D2 → D, an object I of D, and three natural isomorphisms α: c ⊗ (d ⊗ e) → (c ⊗ d) ⊗ e, λ: I ⊗ d → d, and ρ: d ⊗ I → d. (Here c⊗ (d⊗ e) and (c⊗ d)⊗ e denote the evident functors from D3 to D, and similarly for I ⊗ d, d⊗ I and d as functors from D to D, where c, d, e are variables ranging over D.) These correspond to the three basic identities of the equational theory of monoids. To complete the definition of monoidal category we require a certain coherence condition, namely that the other identities of that theory be “generated” in exactly one way from these, see Mac Lane [Mac71] for details. A D-category, or (small) category enriched in a monoidal category D, is a quadruple 〈V, δ,m, i〉 consisting of a set V (which we think of as vertices of a graph), a function δ:V 2 → D0 (the edgelabeling function), a family m of morphisms muvw: δ(u, v)⊗δ(v, w) → δ(u, w) of D (the composition law), and a family i of morphisms iu: I → δ(u, u) (the identities), satisfying the following diagrams. 
(δ(u, v)⊗ δ(v, w))⊗ δ(w, x) αδ(u,v)δ(v,w)δ(w,x) > δ(u, v)⊗ (δ(v, w)⊗ δ(w, x)) muvw ⊗ 1 ∨ 1⊗mvwx ∨ δ(u, w)⊗ δ(w, x) muwx > δ(u, x) < muvx δ(u, v)⊗ δ(v, x)", "title": "" }, { "docid": "a46721e527f1fefd0380b7c8c40729ca", "text": "The use of game-based learning in the classroom has become more common in recent years. Many game-based learning tools and platforms are based on a quiz concept where the students can score points if they can choose the correct answer among multiple answers. The article describes an experiment where the game-based student response system Kahoot! was compared to a traditional non-gamified student response system, as well as the usage of paper forms for formative assessment. The goal of the experiment was to investigate whether gamified formative assessments improve the students’ engagement, motivation, enjoyment, concentration, and learning. In the experiment, the three different formative assessment tools/methods were used to review and summarize the same topic in three parallel lectures in an IT introductory course. The first method was to have the students complete a paper quiz, and then review the results afterwards using hand raising. The second method was to use the non-gamified student response system Clicker where the students gave their response to a quiz through polling. The third method was to use the game-based student response system Kahoot!. All three lectures were taught in the exact same way, teaching the same syllabus and using the same teacher. The only difference was the method use to summarize the lecture. A total of 384 students participated in the experiment, where 127 subjects did the paper quiz, 175 used the non-gamified student response system, and 82 students using the gamified approach. The gender distribution was 48% female students and 52% male students. Preand a post-test were used to assess the learning outcome of the lectures, and a questionnaire was used to get data on the students’ engagement and motivation. The results show significant improvement in motivation, engagement, enjoyment, and concentration for the gamified approach, but we did not find significant learning improvement.", "title": "" }, { "docid": "55702c5dd8986f2510b06bc15870566a", "text": "Queuing networks are used widely in computer simulation studies. Examples of queuing networks can be found in areas such as the supply chains, manufacturing work flow, and internet routing. If the networks are fairly small in size and complexity, it is possible to create discrete event simulations of the networks without incurring significant delays in analyzing the system. However, as the networks grow in size, such analysis can be time consuming, and thus require more expensive parallel processing computers or clusters. We have constructed a set of tools that allow the analyst to simulate queuing networks in parallel, using the fairly inexpensive and commonly available graphics processing units (GPUs) found in most recent computing platforms. We present an analysis of a GPU-based algorithm, describing benefits and issues with the GPU approach. The algorithm clusters events, achieving speedup at the expense of an approximation error which grows as the cluster size increases. We were able to achieve 10-x speedup using our approach with a small error in a specific implementation of a synthetic closed queuing network simulation. This error can be mitigated, based on error analysis trends, obtaining reasonably accurate output statistics. 
The experimental results of the mobile ad hoc network simulation show that errors occur only in the time-dependent output statistics.", "title": "" }, { "docid": "4c82ba56d6532ddc57c2a2978de7fe5a", "text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.", "title": "" }, { "docid": "ec8e80ac733951b3cb2dfebb0fac2cf5", "text": "Now-a-days, researchers are increasingly looking into new and innovative techniques with the help of information technology to overcome the rapid surge in health care costs facing the community. Research undertaken in the past has shown that artificial intelligence (AI) tools and techniques can aid in the diagnosis of disease states and assessment of treatment outcomes. This has been demonstrated in a number of areas, including: help with medical decision support system, classification of heart disease from electrocardiogram (ECG) waveforms, identification of epileptic seizure from electroencephalogram (EEG) signals, ophthalmology to detect glaucoma disease, abnormality in movement pattern (gait) recognition for rehabilitation and potential falls risk minimization, assisting functional electrical stimulation (FES) control in rehabilitation setting of spinal cord injured patients, and clustering of medical images (Begg et al., 2003; Garrett et al., 2003; Masulli et al., 1998; Papadourokis et al., 1998; Silva & Silva, 1998). Recent developments in information technology and AI tools, particularly in neural networks, fuzzy logic and support vector machines, have provided the necessary support to develop highly efficient automated diagnostic systems. Despite plenty of future challenges, these new advances in AI tools hold much promise for future developments in AI-based approaches in solving medical and health-related problems. This article is organized as follows: Following an overview of major AI techniques, a brief review of some of the applications of AI in health care is provided. Future challenges and directions in automated diagnostics are discussed in the summary and conclusion sections.", "title": "" }, { "docid": "6dc078974eb732b2cdc9538d726ab853", "text": "We propose a non-permanent add-on that enables plenoptic imaging with standard cameras. Our design is based on a physical copying mechanism that multiplies a sensor image into a number of identical copies that still carry the plenoptic information of interest. Via different optical filters, we can then recover the desired information. A minor modification of the design also allows for aperture sub-sampling and, hence, light-field imaging. As the filters in our design are exchangeable, a reconfiguration for different imaging purposes is possible. 
We show in a prototype setup that high dynamic range, multispectral, polarization, and light-field imaging can be achieved with our design.", "title": "" }, { "docid": "eb4c25caba8c3e6f06d3cabe6c004cd5", "text": "The greater power of bad events over good ones is found in everyday events, major life events (e.g., trauma), close relationship outcomes, social network patterns, interpersonal interactions, and learning processes. Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones. Various explanations such as diagnosticity and salience help explain some findings, but the greater power of bad events is still found when such variables are controlled. Hardly any exceptions (indicating greater power of good) can be found. Taken together, these findings suggest that bad is stronger than good, as a general principle across a broad range of psychological phenomena.", "title": "" }, { "docid": "a958167cce364ac045e69922191e2f64", "text": "WEKA is a popular machine learning workbench with a development life of nearly two decades. This article provides an overview of the factors that we believe to be important to its success. Rather than focussing on the software’s functionality, we review aspects of project management and historical development decisions that likely had an impact on the uptake of the project.", "title": "" }, { "docid": "bf1bcf55307b02adca47ff696be6f801", "text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.", "title": "" }, { "docid": "f41f4e3b27bda4b3000f3ab5ae9ef22a", "text": "This paper, first analysis the performance of image segmentation techniques; K-mean clustering algorithm and region growing for cyst area extraction from liver images, then enhances the performance of K-mean by post-processing. The K-mean algorithm makes the clusters effectively. 
However, it could not separate out the desired cluster (the cyst) from the image. So, to enhance its performance for cyst region extraction, morphological opening-by-reconstruction is applied to the output of the K-mean clustering algorithm. The results are presented both qualitatively and quantitatively, and demonstrate the superiority of the enhanced K-mean as compared to the standard K-mean and region growing algorithms.", "title": "" }, { "docid": "efd9b39c0d6284999f1d927d1979c769", "text": "This research is focused on finding the key success factors for the fast food industry in the Peshawar region of Pakistan. Fast food concepts have developed very rapidly over the last few years in the Peshawar region. The failure or success of a fast food business depends on factors such as Promotion, Service quality, Customer expectations, Brand, Physical Environment, Price, and Taste of the product. To find which of these factors has the greatest influence on consumer satisfaction, customers of four fast food restaurants were targeted randomly. These four restaurants were KFC, CHIEF, ARBAIN CHICK, and PIZZA HUT. The data were collected from the customers of these restaurants while they were in the restaurants for refreshment. A total of 120 customers were targeted, 30 from each restaurant on an availability basis. Multiple regression and correlation tests were applied to their responses. The findings of the study show that service quality and brand are the key factors for satisfaction in the fast food industry in Peshawar, Pakistan.", "title": "" } ]
scidocsrr
d9c2aa257ffb4b01a993e5b68c9f2c8d
Artificial intelligence, machine learning and deep learning
[ { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" }, { "docid": "eb0001af27b36b02a24bc7f406198270", "text": "This thesis describes the design and implementation of a smile detector based on deep convolutional neural networks. It starts with a summary of neural networks, the difficulties of training them and new training methods, such as Restricted Boltzmann Machines or autoencoders. It then provides a literature review of convolutional neural networks and recurrent neural networks. In order to select databases for smile recognition, comprehensive statistics of databases popular in the field of facial expression recognition were generated and are summarized in this thesis. It then proposes a model for smile detection, of which the main part is implemented. The experimental results are discussed in this thesis and justified based on a comprehensive model selection performed. All experiments were run on a Tesla K40c GPU benefiting from a speedup of up to factor 10 over the computations on a CPU. A smile detection test accuracy of 99.45% is achieved for the Denver Intensity of Spontaneous Facial Action (DISFA) database, significantly outperforming existing approaches with accuracies ranging from 65.55% to 79.67%. This experiment is re-run under various variations, such as retaining less neutral images or only the low or high intensities, of which the results are extensively compared.", "title": "" } ]
[ { "docid": "871f1464f360c76b72c421689624495f", "text": "Coordinated Multi-Point (CoMP) transmission by neighboring base stations to mobile users is recently considered for both the cell-edge and the overall system throughput enhancements in next-generation cellular systems. As one of the most promising CoMP techniques, Joint Transmission (JT)-CoMP achieves higher spectral efficiency by exploiting cooperative diversity gains and inter-cell interference mitigation by coordinated transmission of the same information from multiple transmitting points. In this paper, we investigate the throughput and energy benefits of JT-CoMP transmission (over the conventional single-point transmission) in Long-Term Evolution (LTE) and LTE-Advanced homogeneous and heterogeneous network scenarios. For realistic rate/energy efficiency analysis, we propose a link-level modeling framework based on Finite State Markov Chain models for physical layer transport block transmission over the LTE radio access network. Results obtained show significant rate gains and energy savings using JT-CoMP (compared to the single-point transmission) especially for users in the cell-edge area.", "title": "" }, { "docid": "6cf18bea11ea8e95f24b7db69d3924e2", "text": "Experimentation in software engineering is necessar y but difficult. One reason is that there are a lar ge number of context variables, and so creating a cohesive under standing of experimental results requires a mechani sm for motivating studies and integrating results. It requ ires a community of researchers that can replicate studies, vary context variables, and build models that represent the common observations about the discipline. This paper discusses the experience of the authors, based upon a c llection of experiments, in terms of a framewo rk f r organizing sets of related studies. With such a fra mework, experiments can be viewed as part of common families of studies, rather than being isolated events. Common families of studies can contribute to important and relevant hypotheses that may not be suggested by individual experiments. A framework also facilitates building knowledge in an incremental manner through the replication of experiments within families of studies. To support the framework, this paper discusses the exp riences of the authors in carrying out empirica l studies, with specific emphasis on persistent problems encountere d in xperimental design, threats to validity, crit eria for evaluation, and execution of experiments in the dom ain of software engineering.", "title": "" }, { "docid": "a0d4d6c36cab8c5ed5be69bea1d8f302", "text": "In this paper, we propose a simple, fast decoding algorithm that fosters diversity in neural generation. The algorithm modifies the standard beam search algorithm by adding an intersibling ranking penalty, favoring choosing hypotheses from diverse parents. We evaluate the proposed model on the tasks of dialogue response generation, abstractive summarization and machine translation. We find that diverse decoding helps across all tasks, especially those for which reranking is needed. We further propose a variation that is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL). We observe a further performance boost from this RL technique.1", "title": "" }, { "docid": "7aaf1de930b5aa3ca14fc8b0345999b0", "text": "A disturbance in scapulohumeral rhythm may cause negative biomechanic effects on rotator cuff (RC). 
Alteration in scapular motion and shoulder pain can influence RC strength. Purpose of this study was to assess supraspinatus and infraspinatus strength in 29 overhead athletes with scapular dyskinesis, before and after 3 and 6 months of rehabilitation aimed to restore scapular musculature balance. A passive posterior soft tissues stretching was prescribed to balance shoulder mobility. Scapular dyskinesis patterns were evaluated according to Kibler et al. Clinical assessment was performed with the empty can (EC) test and infraspinatus strength test (IST). Strength values were recorded by a dynamometer; scores for pain were assessed with VAS scale. Changes of shoulder IR were measured. The force values increased at 3 months (P < 0.01) and at 6 months (P < 0.01). Changes of glenohumeral IR and decrease in pain scores were found at both follow-up. Outcomes registered on pain and strength confirm the role of a proper scapular position for an optimal length-tension relationship of the RC muscles. These data should encourage those caring for athletes to consider restoring of scapular musculature balance as essential part of the athletic training.", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. 
Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "d95a204f5e931c9e5a1fff7dbfa3bc8c", "text": "Educational games have long been used in the classroom to add an immersive aspect to the curriculum. While the technology has a cadre of strong advocates, formal reviews have yielded mixed results. Two widely reported problems with educational games are poor production quality and monotonous game-play. On the other hand, commercial noneducational games exhibit both high production standards (good artwork, animation, and sound) and diversity of gameplay experience. Recently, educators have started to use commercial games in the classroom to overcome these obstacles. However, the use of these games is often limited since it is usually difficult to adapt them from their entertainment role. We describe how a commercial computer role-playing game (Neverwinter Nights) can be adapted by non-programmers, to produce a more enriching educational game-playing experience. This adaptation can be done by individual educators, groups of educators or by commercial enterprises. In addition, by using our approach, students can further adapt or augment the games they are playing to gain additional and deeper insights into the models and underlying abstractions of the subject domain they are learning about. This approach can be applied across a wide range of topics such as monetary systems in economics, the geography of a region, the culture of a population, or the sociology of a group or of interacting groups. EDUCATIONAL COMPUTER GAMES Educators are aware of the motivational power of simulation-based gaming and have diligently sought ways to exploit that power (Bowman, 1982; Malone & Lepper, 1987; Cordova & Lepper, 1996). Advocates of this approach have been captivated by the potential of creating immersive experiences (Stadsklev, 1974; Greenblat & Duke, 1975; Gee, 2003). The intent was to have students become existential player/participants operating within a virtual world with goals, resources and potential behaviors shaped by both the underlying model and the players’ experience and choices (Colella, Klopfer & Resnick, 2001; Collins & Ferguson, 1993; Rieber, 1996). Contemporary exponents of educational gaming/simulations have drawn their inspiration from modern video games (Gee, 2003). Like earlier proponents, they have been captivated by the ability of well designed gaming simulations to induce the immersive, \"in-the-groove\" experience that Csikszentmihalyi (1991) described as \"flow.\" They contend that the scaffolded learning principles employed in modern video games create the potential for participant experiences that are personally meaningful, socially rich, essentially experiential and highly epistemological (Bos, 2001; Gee, 2003; Halverson, 2003). Furthermore the design principles of successful video games provide a partial glimpse into possible future educational environments that incorporate what is commonly referred to as “just in time /need to know” learning (Prensky, 2001; Gee, 2005). Unfortunately, educational game producers have not had much success at producing the compelling, immersive environments of successful commercial games (Gee, 2003). 
“Most look like infomercials, showing low quality, poor editing, and low production costs.” (Squire & Jenkins, 2003, p11). Even relatively well received educational games such as Reader Rabbit, The Magic School Bus, Math Blaster, and States and Traits are little more than “electronic flashcards” that simply combine monotonous repetition with visual animations (Card, 1995; Squire & Jenkins, n.d.; Squire & Jenkins, 2003). Approaches to educational gaming/simulation can range from the instructivist in which students learn through playing games (Kafai, 1995) to the experimentalist in which students learn through exploring micro-worlds (Rieber, 1992, 1996) to the constructionist where students learn by building games (Papert & Harel, 1991). Advocates of the latter approach have been in the minority but the potential power of the game-building technologies and their potential as an alternative form of learning or expression is drawing increasing attention from the educational gaming community (Kafai, 2001; Robertson & Good, 2005). We have done some preliminary work with all three of these modes, with most of efforts focused on constructionist approaches (Carbonaro et al., 2005; Szafron et al., 2005). In this paper, we show how our constructivist approach can be adapted to create instructivist classroom materials. On the instructivist side, there are three basic approaches. First, simply use games that were created as educational games such as Reader Rabbit etc. and incur all of the problems manifested in this approach. Second, use commercial games, such as Civilization III (a historical simulation game) (Squire, 2005). However, it can be difficult for the educator to align a commercial game with specific educational topics or goals. Third, adapt a commercial game to meet specific educational goals. This is the approach we describe in this paper. We describe how the same gamebuilding tools we put into the hands of students can be used by educators to easily adapt commercial CPRGs to create instructivist classroom materials in the form of educational computer games.", "title": "" }, { "docid": "ef4e1490de4a837c18aad07f9ad8c5db", "text": "A planar 2-D leaky-wave (LW) antenna, capable of broadside radiation, is presented for millimeter wave applications. A directive surface-wave launcher (SWL) is utilized as the antenna feed exciting cylindrical surface-waves (SWs) on a grounded dielectric slab (GDS). With the addition of a segmented circular strip grating, cylindrical LWs can be excited on the antenna aperture. Measurements illustrate maximum gain at broadside at 19.48 GHz in both the E and H planes with a 10deg half power beamwidth. Specifically, a directive pencil beam is observed just at the edge of the TE1 SW mode cuttoff frequency of the slab (19.47 GHz), suggesting maximum radiation at the edge of a TE stopband.", "title": "" }, { "docid": "f4bdd6416013dfd2b552efef9c1b22e9", "text": "ABSTRACT\nTraumatic hemipelvectomy is an uncommon and life threatening injury. We report a case of a 16-year-old boy involved in a traffic accident who presented with an almost circumferential pelvic wound with wide diastasis of the right sacroiliac joint and symphysis pubis. The injury was associated with complete avulsion of external and internal iliac vessels as well as the femoral and sciatic nerves. He also had ipsilateral open comminuted fractures of the femur and tibia. 
Emergency debridement and completion of amputation with preservation of the posterior gluteal flap and primary anastomosis of the inferior gluteal vessels to the internal iliac artery stump were performed. A free fillet flap was used to close the massive exposed area.\n\n\nKEY WORDS\ntraumatic hemipelvectomy, amputation, and free gluteus maximus fillet flap.", "title": "" }, { "docid": "541d99f3312ad41b8c72f6be3b4106f6", "text": "This study addressed a relatively neglected topic in schizophrenia: identifying methods to reduce stigma directed toward individuals with this disorder. The study investigated whether presentation of information describing the association between violent behavior and schizophrenia could affect subjects' impressions of the dangerousness of both a target person with schizophrenia and individuals with mental illness in general. Subjects with and without previous contact with individuals with a mental illness were administered one of four \"information sheets\" with varying information about schizophrenia and its association with violent behavior. Subjects then read a brief vignette of a male or female target individual with schizophrenia. Results showed that subjects who reported previous contact with individuals with a mental illness rated the male target individual and individuals with mental illness in general as less dangerous than did subjects without previous contact. Subjects who received information summarizing the prevalence rates of violent behavior among individuals with schizophrenia and other psychiatric disorders (e.g., substance abuse) rated individuals with a mental illness as less dangerous than did subjects who did not receive this information. Implications of the findings for public education are discussed.", "title": "" }, { "docid": "d7671e3c1124d3011744b5d35a8b0ac9", "text": "Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation (5G) cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns–3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The physical and medium access control layers are modular and highly customizable, making it easy to integrate algorithms or compare orthogonal frequency division multiplexing numerologies, for example. The module is interfaced with the core network of the ns–3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. 
To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.", "title": "" }, { "docid": "87fa8c6c894208e24328aa9dbb71a889", "text": "In this paper, the design and measurements of an 8-12 GHz high-efficiency MMIC high power amplifier (HPA) implemented in a 0.25μm GaAs pHEMT process is described. The 3-stage amplifier has demonstrated from 37% to 54% power-added efficiency (PAE) with 12W of output power and up to 27dB of small signal gain over the 8-12 GHz range. In particular, over the frequency band of 9-11 GHz, the circuit achieved above 45% PAE. The key to this design is determining and matching the optimum source and load impedance for PAE at the first two harmonics in the output stage.", "title": "" }, { "docid": "51b91ef1b46d6696a0e99eb8649d6447", "text": "A solid-state drive (SSD) gains fast I/O speed and is becoming an ideal replacement for traditional rotating storage. However, its speed and responsiveness heavily depend on internal fragmentation. With a high degree of fragmentation, an SSD may experience sharp performance degradation. Hence, minimizing fragmentation in the SSD is an effective way to sustain its high performance. In this paper, we propose an innovative file data placement strategy for RocksDB, a widely used embedded NoSQL database. The proposed strategy steers data to a write unit exposed by an SSD according to predicted data lifetime. By placing data with similar lifetime in the same write unit, fragmentation in the SSD is controlled at the time of data write. We evaluate our proposed strategy using the Yahoo! Cloud Serving Benchmark. Our experimental results demonstrate that the proposed strategy improves the RocksDB performance significantly: the throughput can be increased by up to 41%, 99.99%ile latency reduced by 59%, and SSD lifetime extended by up to 18%.", "title": "" }, { "docid": "1dfbe95e53aeae347c2b42ef297a859f", "text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put much emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via a cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it can alleviate the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely.
The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "7cc20934720912ad1c056dc9afd97e18", "text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.", "title": "" }, { "docid": "7855f5c3a3abec2f31c3ef9b3b65d9bb", "text": "BLEU is the de facto standard machine translation (MT) evaluation metric. However, because BLEU computes a geometric mean of n-gram precisions, it often correlates poorly with human judgment on the sentence-level. Therefore, several smoothing techniques have been proposed. This paper systematically compares 7 smoothing techniques for sentence-level BLEU. Three of them are first proposed in this paper, and they correlate better with human judgments on the sentence-level than other smoothing techniques. Moreover, we also compare the performance of using the 7 smoothing techniques in statistical machine translation tuning.", "title": "" }, { "docid": "ce6296ae51be4e6fe3d36be618cdfe75", "text": "UNLABELLED\nOBJECTIVES. Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research.\n\n\nMETHODS\nThis article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade.\n\n\nRESULTS\nResearch to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. Discussion. The review ends with discussion of priority issues for future research.", "title": "" }, { "docid": "8db3f92e38d379ab5ba644ff7a59544d", "text": "Within American psychology, there has been a recent surge of interest in self-compassion, a construct from Buddhist thought. Self-compassion entails: (a) being kind and understanding toward oneself in times of pain or failure, (b) perceiving one’s own suffering as part of a larger human experience, and (c) holding painful feelings and thoughts in mindful awareness. In this article we review findings from personality, social, and clinical psychology related to self-compassion. First, we define self-compassion and distinguish it from other self-constructs such as self-esteem, self-pity, and self-criticism. Next, we review empirical work on the correlates of self-compassion, demonstrating that self-compassion has consistently been found to be related to well-being. 
These findings support the call for interventions that can raise self-compassion. We then review the theory and empirical support behind current interventions that could enhance self-compassion including compassionate mind training (CMT), imagery work, the gestalt two-chair technique, mindfulness based stress reduction (MBSR), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT). Directions for future research are also discussed.", "title": "" }, { "docid": "2a1d77e0c5fe71c3c5eab995828ef113", "text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications. Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.", "title": "" }, { "docid": "85ba8c2cb24fcd991f9f5193f92e736a", "text": "Energy-efficient operation is a challenge for wireless sensor networks (WSNs). A common method employed for this purpose is duty-cycled operation, which extends battery lifetime yet incurs several types of energy wastes and challenges. A promising alternative to duty-cycled operation is the use of wake-up radio (WuR), where the main microcontroller unit (MCU) and transceiver, that is, the two most energy-consuming elements, are kept in energy-saving mode until a special signal from another node is received by an attached, secondary, ultra-low power receiver. Next, this so-called wake-up receiver generates an interrupt to activate the receiver node's MCU and, consequently, the main radio. This article presents a complete wake-up radio design that targets simplicity in design for the monetary cost and flexibility concerns, along with a good operation range and very low power consumption. Both the transmitter (WuTx) and the receiver (WuRx) designs are presented with the accompanying physical experiments for several design alternatives. Detailed analysis of the end system is provided in terms of both operational distance (more than 10 m) and current consumption (less than 1 μA). As a reference, a commercial WuR system is analyzed and compared to the presented system by expressing the trade-offs and advantages of both systems.", "title": "" }, { "docid": "23e07013a82049f0c4e88bd071a083f8", "text": "A triple-resonance LC network increases the bandwidth of cascaded differential pairs by a factor of 2√3, yielding a 40-Gb/s CMOS amplifier with a gain of 15 dB and a power dissipation of 190 mW from a 2.2-V supply.
An ESD protection circuit employs negative capacitance along with T-coils and pn junctions to operate at 40 Gb/s while tolerating 700-800 V.", "title": "" } ]
scidocsrr
369280d4d23c9f75a9413a9616731578
Microstrip Yagi-Uda Antenna for 2.45 GHz RFID Handheld Reader
[ { "docid": "e6e19f678bfe46d8390e32f28f1d675d", "text": "In this paper, a miniaturized printed dipole antenna with the V-shaped ground is proposed for radio frequency identification (RFID) readers operating at the frequency of 2.45 GHz. The principles of the microstrip balun and the printed dipole are analyzed and design considerations are formulated. Through extending and shaping the ground to reduce the coupling between the balun and the dipole, the antenna’s impedance bandwidth is broadened and the antenna’s radiation pattern is improved. The 3D finite difference time domain (FDTD) Electromagnetic simulations are carried out to evaluate the antenna’s performance. The effects of the extending angle and the position of the ground are investigated to obtain the optimized parameters. The antenna was fabricated and measured in a microwave anechoic chamber. The results show that the proposed antenna achieves a broader impedance bandwidth, a higher forward radiation gain and a stronger suppression to backward radiation compared with the one without such a ground.", "title": "" } ]
[ { "docid": "46a4e4dbcb9b6656414420a908b51cc5", "text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.", "title": "" }, { "docid": "3c79c23036ed7c9a5542670264310141", "text": "This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that using only available hardware for variable-speed turbines improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine rather than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation", "title": "" }, { "docid": "87bde2e773b513117ea51f3e6a5fe011", "text": "We present a study of the relationship between gender, linguistic style, and social networks, using a novel corpus of 14,000 Twitter users. Prior quantitative work on gender often treats this social variable as a female/male binary; we argue for a more nuanced approach. By clustering Twitter users, we find a natural decomposition of the dataset into various styles and topical interests. Many clusters have strong gender orientations, but their use of linguistic resources sometimes directly conflicts with the population-level language statistics. We view these clusters as a more accurate reflection of the multifaceted nature of gendered language styles. Previous corpus-based work has also had little to say about individuals whose linguistic styles defy population-level gender patterns. To identify such individuals, we train a statistical classifier, and measure the classifier confidence for each individual in the dataset. Examining individuals whose language does not match the classifier's model for their gender, we find that they have social networks that include significantly fewer same-gender social connections, and that in general, social network homophily is correlated with the use of same-gender language markers. Pairing computational methods and social theory thus offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms. [206 words]", "title": "" }, { "docid": "a418cff28ecf8c582d3f90a8b141d525", "text": "Sarcasm is a special form of irony or satirical wit in which people convey the opposite of what they mean. Sarcasm largely increases in social networks, especially in Twitter. Detecting sarcasm in tweets improves the automatic analysis tools that analyze the data to provide or enhance customer service and fabricate or enhance a product. 
Also, there are few studies that focus on detecting Arabic sarcasm in tweets. Consequently, we propose a classifier model that detects Arabic-sarcasm tweets by classifying them as sarcastic by setting some features that may declare a tweet as sarcastic using Weka. We evaluated our model through recall, precision, and f-score measurements that gave 0.659, 0.710, and 0.676 values, respectively; these results are high, especially when it comes to Arabic.", "title": "" }, { "docid": "9f3181d49f41e83dc6ea6d980b011f48", "text": "A new design of the compact nanosecond pulse generator based on the drift step recovery diodes as an ignition system for internal combustion engines (ICEs) has been presented. Experimental results of the comparative analysis of the standard ignition system and an ignition system on the basis of the nanosecond pulsed discharge for the four-cylinder ICE have been presented. It was shown that a nonequilibrium plasma formed by the discharge was an effective way to reduce the specific fuel consumption as well as the concentration of nitrogen oxides in the engine exhaust gases. A numerical analysis of possible mechanisms of the nonequilibrium plasma influence on the suppression of the nitrogen oxides formation has been carried out.", "title": "" }, { "docid": "3615db7b4a62f981ef62062084597ca5", "text": "Adoption is a topic of crucial importance both to those directly involved and to society. Yet, at this writing, the federal government collects no comprehensive national statistics on adoption. The purpose of this article is to address what we do know, what we do not know, and what we need to know about the statistics on adoption. The article provides an overview of adoption and describes data available regarding adoption arrangements and the characteristics of parents who relinquish children, of children who are adopted or in substitute care, and of adults who seek to adopt. Recommendations for future data collection are offered, including the establishment of a national data collection system for adoption statistics. Kathy S. Stolley, M.A., is an instructor in the Department of Sociology and Criminal Justice at Old Dominion University, Norfolk, VA. Adoption is an issue of vital importance for all persons involved in the adoption triangle: the child, the adoptive parents, and the birthparents. According to national estimates, one million children in the United States live with adoptive parents, and from 2% to 4% of American families include an adopted child. Adoption is most important for infertile couples seeking children and children in need of parents. Yet adoption issues also have consequences for the larger society in such areas as public welfare and mental health. Additionally, adoption can be framed as a public health issue, particularly in light of increasing numbers of pediatric AIDS cases and concerns regarding drug-exposed infants, and “boarder” babies available for adoption. Adoption is also often supported as an alternative to abortion. Limitations of Available Data Despite the importance of adoption to many groups, it remains an underresearched area and a topic on which the data are incomplete. Indeed, at this writing, no comprehensive national data on adoption are collected by the federal government. Through the Children’s Bureau and later the National Center for Social Statistics (NCSS), the federal government collected adoption data periodically between 1944 and 1957, then annually from 1957 to 1975.
States voluntarily reported summary statistics on all types of finalized adoptions using data primarily drawn from court records. The number of states and territories participating in this reporting system varied from year to year, ranging from a low of 22 in 1944 to a high of 52 during the early 1960s. This data collection effort ended in 1975 with the dissolution of the NCSS.", "title": "" }, { "docid": "df56d2914cdfbc31dff9ecd9a3093379", "text": "In this paper, a square slot (SS) upheld by the substrate integrated waveguide (SIW) cavity is presented. A simple 50 Ω microstrip line is employed to feed this cavity. Then slot matched cavity modes are coupled to the slot and radiated efficiently. The proposed antenna features the following structural advantages: compact size, light weight and easy low cost fabrication. Concerning the electrical performance, it exhibits 15% impedance bandwidth for the reflection coefficient less than -10 dB and the realized gain touches the 8.5 dB frontier.", "title": "" }, { "docid": "d9ac3ee5ccfa160da42bc740d35faa6f", "text": "This study aimed to determine the prevalence and sources of stress among Thai medical students. The questionnaires, which consisted of the Thai Stress Test (TST) and questions asking about sources of stress, were sent to all medical students in the Faculty of Medicine, Ramathibodi Hospital, Thailand. A total of 686 students participated. The results showed that about 61.4% of students had some degree of stress. Seventeen students (2.4%) reported a high level of stress. The prevalence of stress is highest among third-year medical students. Academic problems were found to be a major cause of stress among all students. The most prevalent source of academic stress was the test/exam. Other sources of stress in medical school and their relationships are also discussed. The findings can help medical teachers understand more about stress among their students and guide the way to improvement in an academic context, which is important for student achievement.", "title": "" }, { "docid": "d77ec9805763e9afd9a229f534338fde", "text": "The purpose of the study was to investigate the effects of teachers’ demographic variables on implementation of Information Communication Technology in public secondary schools in Nyeri Central district, Kenya. The dependent variable was implementation of ICT and the independent variables were teachers’ teaching experience and training. The research design used was descriptive survey design. The target population was 275 teachers working in 15 public secondary schools in Nyeri Central district. The sampling design was stratified random sampling and sample size was 82 teachers. The study targeted 15 principals of the schools in Nyeri Central district. The data collection tools were questionnaires, interview schedule and observation schedule. Data were analyzed quantitatively and qualitatively. Teachers’ training in ICT and teaching experience are not consistent in affecting ICT implementation. Many schools especially in rural areas had not embraced ICT mainly because teachers lacked adequate training, had lower levels of education, and had a negative attitude towards ICT implementation. This has led to schools facing major challenges in ICT implementation. The researcher recommends that public secondary schools should find a way to purchase more ICT facilities and support teachers’ training on the use of ICT.
The government needs to give more financial support through free education programme and donations to enhance ICT implementation in public secondary schools. The teachers should change their attitude towards the use and implementation of ICT in the schools so as to create information technology culture in all aspects of teaching and learning. Wachiuri Reuben Nguyo", "title": "" }, { "docid": "264338f11dbd4d883e791af8c15aeb0d", "text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.", "title": "" }, { "docid": "d83d672642531e1744afe77ed8379b90", "text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.", "title": "" }, { "docid": "9b5eca94a1e02e97e660d0f5e445a8a1", "text": "PURPOSE\nThe purpose of this study was to evaluate the effect of individualized repeated intravitreal injections of ranibizumab (Lucentis, Genentech, South San Francisco, CA) on visual acuity and central foveal thickness (CFT) for branch retinal vein occlusion-induced macular edema.\n\n\nMETHODS\nThis study was a prospective interventional case series. Twenty-eight eyes of 28 consecutive patients diagnosed with branch retinal vein occlusion-related macular edema treated with repeated intravitreal injections of ranibizumab (when CFT was >225 microm) were evaluated. 
Optical coherence tomography and fluorescein angiography were performed monthly.\n\n\nRESULTS\nThe mean best-corrected distance visual acuity improved from 62.67 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.74 +/- 0.28 [mean +/- standard deviation]) at baseline to 76.8 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.49 +/- 0.3; statistically significant, P < 0.001) at the end of the follow-up (9 months). The mean letter gain (including the patients with stable and worse visual acuities) was 14.3 letters (2.9 lines). During the same period, 22 of the 28 eyes (78.6%) showed improved visual acuity, 4 (14.2%) had stable visual acuity, and 2 (7.14%) had worse visual acuity compared with baseline. The mean CFT improved from 349 +/- 112 microm at baseline to 229 +/- 44 microm (significant, P < 0.001) at the end of follow-up. A mean of six injections was performed during the follow-up period. Our subgroup analysis indicated that patients with worse visual acuity at presentation (<or=50 letters in our series) showed greater visual benefit from treatment. \"Rebound\" macular edema was observed in 5 patients (17.85%) at the 3-month follow-up visit and in none at the 6- and 9-month follow-ups. In 18 of the 28 patients (53.6%), the CFT was <225 microm at the last follow-up visit, and therefore, further treatment was not instituted. No ocular or systemic side effects were noted.\n\n\nCONCLUSION\nIndividualized repeated intravitreal injections of ranibizumab showed promising short-term results in visual acuity improvement and decrease in CFT in patients with macular edema associated with branch retinal vein occlusion. Further studies are needed to prove the long-term effect of ranibizumab treatment on patients with branch retinal vein occlusion.", "title": "" }, { "docid": "8988aaa4013ef155cbb09644ca491bab", "text": "Uses and gratification theory aids in the assessment of how audiences use a particular medium and the gratifications they derive from that use. In this paper this theory has been applied to derive Internet uses and gratifications for Indian Internet users. This study proceeds in four stages. First, six first-order gratifications namely self development, wide exposure, user friendliness, relaxation, career opportunities, and global exchange were identified using an exploratory factor analysis. Then the first order gratifications were subjected to firstorder confirmatory factor analysis. Third, using second-order confirmatory factor analysis three types of secondorder gratifications were obtained, namely process gratifications, content gratifications and social gratifications. Finally, with the use of t-tests the study has shown that males and females differ significantly on the gratification factors “self development”, “user friendliness”, “wide exposure” and “relaxation.” The intended audience consists of masters’ level students and doctoral students who want to learn exploratory factor analysis and confirmatory factor analysis. This case study can also be used to teach the basics of structural equation modeling using the software AMOS.", "title": "" }, { "docid": "46eaa1108cf5027b5427fda8fc9197ff", "text": "ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. 
The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.", "title": "" }, { "docid": "9de7af8824594b5de7d510c81585c61b", "text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.", "title": "" }, { "docid": "951ad18af2b3c9b0ca06147b0c804f65", "text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.", "title": "" }, { "docid": "db6f0d22ce46e416067821e99ec3faca", "text": "Traditional pet therapy enhances individual well-being. 
However, there are situations where a substitute artificial companion (i.e., robotic pet) may serve as a better alternative because of insufficient available resources to care for a real pet, allergic responses to pets, or other difficulties. This pilot study, which compared the benefits of a robotic cat and a plush toy cat as interventions for elderly persons with dementia, was conducted at a special care unit of a large, not-for-profit nursing home. Various aspects of a person's engagement and affect were assessed through direct observations. Though not identical, similar trends were seen for the two cats. Interacting with the cats was linked with decreased agitation and increased pleasure and interest. The study is intended to pave the way for future research on robotherapy with nursing home residents.", "title": "" }, { "docid": "8fc05d9e26c0aa98ffafe896d8c5a01b", "text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions. ∗The author is also affiliated with Worcester Polytechnic Institute (szhao@wpi.edu). †The author is also affiliated with Northwestern University (kathy.lee@eecs.northwestern.edu). ‡The author is also affiliated with Brandeis University (aprakash@brandeis.edu).", "title": "" }, { "docid": "9adaeac8cedd4f6394bc380cb0abba6e", "text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. 
For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".", "title": "" } ]
scidocsrr
5d2d098c19485cad4e7b45a252320940
Through-Wall Imaging of Moving Targets Using UWB Random Noise Radar
[ { "docid": "cdd0df004c24963c8ad1f405b1a3e1b0", "text": "Various parts of the human body have different movements when a person is performing different physical activities. There is a need to remotely detect human heartbeat and breathing for applications involving anti-terrorism and search-and-rescue. Ultrawideband noise radar systems are attractive because they are covert and immune from interference. The conventional time-frequency analyses of human activity are not generally applicable to nonlinear and nonstationary signals. If one can decompose the noisy baseband reflected signal and extract only the human-induced Doppler from it, the identification of various human activities becomes easier. We propose a nonstationary model to describe human motion and apply the Hilbert-Huang transform (HHT), which is adaptive to nonlinear and nonstationary signals, in order to analyze frequency characteristics of the baseband signal. When used with noise-like radar data, it is useful covertly identify specific human movement.", "title": "" } ]
[ { "docid": "4cba17be3bb11ba3f2051f5e574a2789", "text": "The recent advances in RFID offer vast opportunities for research, development and innovation in agriculture. The aim of this paper is to give readers a comprehensive view of current applications and new possibilities, but also explain the limitations and challenges of this technology. RFID has been used for years in animal identification and tracking, being a common practice in many farms. Also it has been used in the food chain for traceability control. The implementation of sensors in tags, make possible to monitor the cold chain of perishable food products and the development of new applications in fields like environmental monitoring, irrigation, specialty crops and farm machinery. However, it is not all advantages. There are also challenges and limitations that should be faced in the next years. The operation in harsh environments, with dirt, extreme temperatures; the huge volume of data that are difficult to manage; the need of longer reading ranges, due to the reduction of signal strength due to propagation in crop canopy; the behavior of the different frequencies, understanding what is the right one for each application; the diversity of the standards and the level of granularity are some of them. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c27f8a936f1b5da0b6ddb68bdfb205a8", "text": "Developmental dyslexia refers to a group of children who fail to learn to read at the normal rate despite apparently normal vision and neurological functioning. Dyslexic children typically manifest problems in printed word recognition and spelling, and difficulties in phonological processing are quite common (Lyon, 1995; Rack, Snowling, & Olson, 1992; Stanovich, 1988; Wagner & Torgesen, 1987). The phonological processing problems include, but are not limited to difficulties in pronouncing nonsense words, poor phonemic awareness, problems in representing phonological information in short-term memory and difficulty in rapidly retrieving the names of familiar objects, digits and letters (Stanovich, 1988; Wagner & Torgesen, 1987; Wolf & Bowers, 1999). The underlying cause of phonological deficits in dyslexic children is not yet clear. One possible source is developmentally deviant perception of speech at the phoneme level. A number of studies have shown that dyslexics' categorizations of speech sounds are less sharp than normal readers (Chiappe, Chiappe, & Siegel, 2001; Godfrey, Syrdal-Lasky, Millay, & Knox, 1981; Maassen, Groenen, Crul, Assman-Hulsmans, & Gabreels, 2001; Reed, 1989; Serniclaes, Sprenger-Charolles, Carré, & Demonet, 2001;Werker & Tees, 1987). These group differences have appeared in tasks requiring the labeling of stimuli varying along a perceptual continuum (such as voicing or place of articulation), as well as on speech discrimination tasks. In two studies, there was evidence that dyslexics showed better discrimination of sounds differing phonetically within a category boundary (Serniclaes et al, 2001; Werker & Tees, 1987), whereas in one study, dyslexics were poorer at both within-phoneme and between phoneme discrimination (Maassen et al, 2001). There is evidence that newborns and 6-month olds with a familial risk for dyslexia have reduced sensitivity to speech and non-speech sounds (Molfese, 2000; Pihko, Leppanen, Eklund, Cheour, Guttorm & Lyytinen, 1999). 
If dyslexics are impaired from birth in auditory processing, or more specifically in speech perception, this would affect the development and use of phonological representations on a wide variety of tasks, most intensively in phonological awareness and decoding. Although differences in speech perception have been observed, it has also been noted that the effects are often weak, small in size or shown by only some of the dyslexic subjects (Adlard & Hazan, 1998; Brady, Shankweiler, & Mann, 1983; Elliot, Scholl, Grant, & Hammer, 1990; Manis, McBride-Chang, Seidenberg, Keating, Doi, Munson, & Petersen (1997); Nittrouer, 1999; Snowling, Goulandris, Bowlby, & Howell, 1986). One reason for small, or variable effects, might be that the dyslexic population is heterogeneous, and that speech perception problems are more common among particular subgroups of dyslexics. A specific hypothesis is that speech perception problems are more concentrated among dyslexic children showing greater", "title": "" }, { "docid": "2788ad279b96e830ba957106374e2537", "text": "We present a new lock-free parallel algorithm for computing betweenness centrality of massive complex networks that achieves better spatial locality compared with previous approaches. Betweenness centrality is a key kernel in analyzing the importance of vertices (or edges) in applications ranging from social networks, to power grids, to the influence of jazz musicians, and is also incorporated into the DARPA HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph analytics. We design an optimized implementation of betweenness centrality for the massively multithreaded Cray XMT system with the Thread-storm processor. For a small-world network of 268 million vertices and 2.147 billion edges, the 16-processor XMT system achieves a TEPS rate (an algorithmic performance count for the number of edges traversed per second) of 160 million per second, which corresponds to more than a 2× performance improvement over the previous parallel implementation. We demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for the large IMDb movie-actor network.", "title": "" }, { "docid": "428de42a8b3091728724ea9abefffb0b", "text": "BACKGROUND\nIn developed countries, regular breakfast consumption is inversely associated with excess weight and directly associated with better dietary and improved physical activity behaviors. Our objective was to describe the frequency of breakfast consumption among school-going adolescents in Delhi and evaluate its association with overweight and obesity as well as other dietary, physical activity, and sedentary behaviors.\n\n\nMETHODS\n\n\n\nDESIGN\nCross-sectional study.\n\n\nSETTING\nEight schools (Private and Government) of Delhi in the year 2006.\n\n\nPARTICIPANTS\n1814 students from 8th and 10th grades; response rate was 87.2%; 55% were 8th graders, 60% were boys and 52% attended Private schools.\n\n\nMAIN OUTCOME MEASURES\nBody mass index, self-reported breakfast consumption, diet and physical activity related behaviors, and psychosocial factors.\n\n\nDATA ANALYSIS\nMixed effects regression models were employed, adjusting for age, gender, grade level and school type (SES).\n\n\nRESULTS\nSignificantly more Government school (lower SES) students consumed breakfast daily as compared to Private school (higher SES) students (73.8% vs. 66.3%; p<0.01). 
More 8th graders consumed breakfast daily vs.10th graders (72.3% vs. 67.0%; p<0.05). A dose-response relationship was observed such that overall prevalence of overweight and obesity among adolescents who consumed breakfast daily (14.6%) was significantly lower vs. those who only sometimes (15.2%) or never (22.9%) consumed breakfast (p<0.05 for trend). This relationship was statistically significant for boys (15.4 % vs. 16.5% vs. 26.0; p<0.05 for trend) but not for girls. Intake of dairy products, fruits and vegetables was 5.5 (95% CI 2.4-12.5), 1.7 (95% CI 1.1-2.5) and 2.2 (95% CI 1.3-3.5) times higher among those who consumed breakfast daily vs. those who never consumed breakfast. Breakfast consumption was associated with greater physical activity vs. those who never consumed breakfast. Positive values and beliefs about healthy eating; body image satisfaction; and positive peer and parental influence were positively associated with daily breakfast consumption, while depression was negatively associated.\n\n\nCONCLUSION\nDaily breakfast consumption is associated with less overweight and obesity and with healthier dietary- and physical activity-related behaviors among urban Indian students. Although prospective studies should confirm the present results, intervention programs to prevent or treat childhood obesity in India should consider emphasizing regular breakfast consumption.", "title": "" }, { "docid": "1a78e17056cca09250c7cc5f81fb271b", "text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lanes model in Hough Space with a maximum likelihood angle and dynamic pole detection region of interests (ROIs), which is robust to road bumpiness, lane structure changing while the ego-car’s driving and interferential markings on the ground. What’s more, this model can be improved with geographic information system or electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce irrelevant objects’ interfere and detect forward collision distance. For lane classification, a convolutional neural network is trained by using manually labeled ROI from KITTI data set to classify the left/right-side line of host lane so that we can provide significant information for lane changing strategy making in ADAS. Quantitative experimental evaluation shows good true positive rate on lane detection and classification with a real-time (15Hz) working speed. Experimental results also demonstrate a certain level of system robustness on variation of the environment.", "title": "" }, { "docid": "c451c09ca5535cce49d4fa5d0df7318f", "text": "This paper features the kinematic analysis of a SCORBOT-ER Vplus robot arm which is used for doing successful robotic manipulation task in its workspace. The SCORBOT-ER Vplus is a 5-dof vertical articulated robot and all the joints are revolute [1]. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study along with 4x4 homogeneous matrix. SCORBOT-ER Vplus is a dependable and safe robotic system designed for laboratory and training applications. 
This versatile system allows students to gain theoretical and practical experience in robotics, automation and control systems. MATLAB 8.0 is used to solve this mathematical model for a set of joint parameters.", "title": "" }, { "docid": "f0e22717207ed3bc013d09db3edc337c", "text": "The bag-of-words model is one of the most popular representation methods for object categorization. The key idea is to quantize each extracted key point into one of the visual words, and then represent each image by a histogram of the visual words. For this purpose, a clustering algorithm (e.g., K-means) is generally used for generating the visual words. Although a number of studies have shown encouraging results of the bag-of-words representation for object categorization, theoretical studies on properties of the bag-of-words model are almost untouched, possibly due to the difficulty introduced by using a heuristic clustering process. In this paper, we present a statistical framework which generalizes the bag-of-words representation. In this framework, the visual words are generated by a statistical process rather than using a clustering algorithm, while the empirical performance is competitive with clustering-based methods. A theoretical analysis based on statistical consistency is presented for the proposed framework. Moreover, based on the framework we developed two algorithms which do not rely on clustering, while achieving competitive performance in object categorization when compared to clustering-based bag-of-words representations.", "title": "" }, { "docid": "7e38d03cdfcbfb002a07017b27260b4a", "text": "We present the CBTree, a new counting-based self-adjusting search tree that, like splay trees, moves more frequently accessed nodes closer to the root. After m operations on n items, q of which access some item v, an operation on v traverses a path of length O(log(m/q)) while performing few if any rotations. In contrast to the traditional self-adjusting splay tree in which each accessed item is moved to the root through a sequence of tree rotations, CBTree performs rotations infrequently (an amortized subconstant o(1) per operation if m >> n), mostly at the bottom of the tree, therefore CBTree scales with the concurrency level. We adapt CBTree to a multicore setting and show experimentally that it improves performance compared to existing concurrent search trees on non-uniform access sequences derived from real workloads. CBTree achieves the above by trading off rotations for keeping history. Each node maintains a count of the number of accesses to it, and CBTree uses the counters to decide when to do a rotation. CBTree's decision rule is inspired by the analysis of splaying, which we draw upon to prove that CBTree achieves similar time bounds to the splay tree while performing only few if any tree rotations per operation. We present another new self-adjusting algorithm, LazyCBTree. LazyCBTree is near optimal sequentially; however, it improves the performance on multi-core machines in practice. LazyCBTree, the lazy counting-based tree, like CBTree counts accesses to nodes and moves frequently accessed subtrees towards the root, obtaining optimal tree structure in practice including in a sequential setting. Unlike CBTree, LazyCBTree lazily makes at most one local tree rotation on each lookup, usually requiring no restructuring at all. LazyCBTree thus avoids creating a sequential bottleneck.
We evaluate CBTree and LazyCBTree on real non-uniform access patterns and show that they significantly improve performance compared to existing self-adjusting trees as well as balanced (non-adjusting) trees.", "title": "" }, { "docid": "44791d65e5f5e4645a6f99c0b2cdac8f", "text": "Electronic Music Distribution (EMD) is in demand of robust, automatically extracted music descriptors. We introduce a timbral similarity measure for comparing music titles. This measure is based on a Gaussian model of cepstrum coefficients. We describe the timbre extractor and the corresponding timbral similarity relation. We describe experiments in assessing the quality of the similarity relation, and show that the measure is able to yield interesting similarity relations, in particular when used in conjunction with other similarity relations. We illustrate the use of the descriptor in several EMD applications developed in the context of the Cuidado European project.", "title": "" }, { "docid": "fb224564e0a9344c08286e0cdad9aa58", "text": "This study investigates the practice of manufacturing flexibility in small and medium sized firms. Using the data collected from 87 firms from machinery and machine tool industries in Taiwan, we analyzed and prescribed the alignment of various manufacturing flexibility dimensions with business strategies. Several practical approaches to developing manufacturing flexibility in small and medium sized firms were discussed. In addition, statistical results indicate that the compatibility between business strategy and manufacturing flexibility is critical to business performance. The one-to-one relationship between business strategy and manufacturing flexibility is established to enable managers to set clear priorities in investing and developing necessary manufacturing flexibility. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "5ce4f8227c5eebfb8b7b1dffc5557712", "text": "In this paper, we propose a novel approach for face spoofing detection using the high-order Local Derivative Pattern from Three Orthogonal Planes (LDP-TOP). The proposed method is not only simple to derive and implement, but also highly efficient, since it takes into account both spatial and temporal information in different directions of subtle face movements. According to experimental results, the proposed approach outperforms state-of-the-art methods on three reference datasets, namely Idiap REPLAY-ATTACK, CASIA-FASD, and MSU MFSD. Moreover, it requires only 25 video frames from each video, i.e., only one second, and thus potentially can be performed in real time even on low-cost devices.", "title": "" }, { "docid": "44d35096c19c909c00b56a474d00377e", "text": "This paper introduces the large scale visual search algorithm and system infrastructure at Alibaba. The following challenges are discussed under the E-commercial circumstance at Alibaba: (a) how to handle heterogeneous image data and bridge the gap between real-shot images from user query and the online images; (b) how to deal with large scale indexing for massive updating data; (c) how to train deep models for effective feature representation without huge human annotations; (d) how to improve the user engagement by considering the quality of the content. We take advantage of the large image collection of Alibaba and state-of-the-art deep learning techniques to perform visual search at scale.
We present solutions and implementation details to overcome those problems and also share our learnings from building such a large scale commercial visual search engine. Specifically, model and search-based fusion approach is introduced to effectively predict categories. Also, we propose a deep CNN model for joint detection and feature learning by mining user click behavior. The binary index engine is designed to scale up indexing without compromising recall and precision. Finally, we apply all the stages into an end-to-end system architecture, which can simultaneously achieve highly efficient and scalable performance adapting to real-shot images. Extensive experiments demonstrate the advancement of each module in our system. We hope visual search at Alibaba becomes more widely incorporated into today's commercial applications.", "title": "" }, { "docid": "7b654a88ed7db6db90856e947fe0327d", "text": "The need for automatic document summarization that can be used for practical applications is increasing rapidly. In this paper, we propose a general framework for summarization that extracts sentences from a document using externally related information. Our work is aimed at single document summarization using small amounts of reference summaries. In particular, we address document summarization in the framework of multitask learning using curriculum learning for sentence extraction and document classification. The proposed framework enables us to obtain better feature representations to extract sentences from documents. We evaluate our proposed summarization method on two datasets: financial report and news corpus. Experimental results demonstrate that our summarizers achieve performance that is comparable to stateof-the-art systems.", "title": "" }, { "docid": "f4239b2be54e80666bd21d8c50a6b1b0", "text": "Limited work has examined how self-affirmation might lead to positive outcomes beyond the maintenance of a favorable self-image. To address this gap in the literature, we conducted two studies in two cultures to establish the benefits of self-affirmation for psychological well-being. In Study 1, South Korean participants who affirmed their values for 2 weeks showed increased eudaimonic well-being (need satisfaction, meaning, and flow) relative to control participants. In Study 2, U.S. participants performed a self-affirmation activity for 4 weeks. Extending Study 1, after 2 weeks, self-affirmation led both to increased eudaimonic well-being and hedonic well-being (affect balance). By 4 weeks, however, these effects were non-linear, and the increases in affect balance were only present for vulnerable participants-those initially low in eudaimonic well-being. In sum, the benefits of self-affirmation appear to extend beyond self-protection to include two types of well-being.", "title": "" }, { "docid": "e44fefc20ed303064dabff3da1004749", "text": "Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. 
In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.", "title": "" }, { "docid": "2d6627f0cd3b184bae491d7ae003fe82", "text": "The aim of this paper is to explore the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision based navigation system which combines inertial sensors, visual odometer and registration of a UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate the drift of the UAV state estimation which occurs when only inertial sensors and visual odometer are used.", "title": "" }, { "docid": "1f76b250a4bb89739cb24a70eba05c5f", "text": "This document aims to provide a review on learning with deep generative models (DGMs), which is an highly-active area in machine learning and more generally, artificial intelligence. This review is not meant to be a tutorial, but when necessary, we provide self-contained derivations for completeness. This review has two features. First, though there are different perspectives to classify DGMs, we choose to organize this review from the perspective of graphical modeling, because the learning methods for directed DGMs and undirected DGMs are fundamentally different. Second, we differentiate model definitions from model learning algorithms, since different learning algorithms can be applied to solve the learning problem on the same model, and an algorithm can be applied to learn different models. We thus separate model definition and model learning, with more emphasis on reviewing, differentiating and connecting different learning algorithms. We also discuss promising future research directions. This review is by no means comprehensive as the field is evolving rapidly. The authors apologize in advance for any missed papers and inaccuracies in descriptions. Corrections and comments are highly welcome.", "title": "" }, { "docid": "3267c5a5f4ab9602d6f69c3d9d137c96", "text": "This paper briefly discusses the measurement on soil moisture distribution using Electrical Capacitance Tomography (ECT) technique. ECT sensor with 12 electrodes was used for visualization measurement of permittivity distribution. ECT sensor was calibrated using low and high permittivity material i.e. dry sand and saturated soils (sand and clay) respectively. The measurements obtained were recorded and further analyzed by using Linear Back Projection (LBP) image reconstruction. Preliminary result shows that there is a positive correlation with increasing water volume.", "title": "" }, { "docid": "931b8f97d86902f984338285e62c8ef8", "text": "One of the goals of Artificial intelligence (AI) is the realization of natural dialogue between humans and machines. in recent years, the dialogue systems, also known as interactive conversational systems are the fastest growing area in AI. 
Many companies have used the dialogue systems technology to establish various kinds of Virtual Personal Assistants(VPAs) based on their applications and areas, such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, Google Assistant, and Facebook's M. However, in this proposal, we have used the multi-modal dialogue systems which process two or more combined user input modes, such as speech, image, video, touch, manual gestures, gaze, and head and body movement in order to design the Next-Generation of VPAs model. The new model of VPAs will be used to increase the interaction between humans and the machines by using different technologies, such as gesture recognition, image/video recognition, speech recognition, the vast dialogue and conversational knowledge base, and the general knowledge base. Moreover, the new VPAs system can be used in other different areas of applications, including education assistance, medical assistance, robotics and vehicles, disabilities systems, home automation, and security access control.", "title": "" }, { "docid": "07e69863c4c6531e310b0302d290cbad", "text": "Recently two-stage detectors have surged ahead of single-shot detectors in the accuracy-vs-speed trade-off. Nevertheless single-shot detectors are immensely popular in embedded vision applications. This paper brings singleshot detectors up to the same level as current two-stage techniques. We do this by improving training for the stateof-the-art single-shot detector, RetinaNet, in three ways: integrating instance mask prediction for the first time, making the loss function adaptive and more stable, and including additional hard examples in training. We call the resulting augmented network RetinaMask. The detection component of RetinaMask has the same computational cost as the original RetinaNet, but is more accurate. COCO test-dev results are up to 41.4 mAP for RetinaMask-101 vs 39.1mAP for RetinaNet-101, while the runtime is the same during evaluation. Adding Group Normalization increases the performance of RetinaMask-101 to 41.7 mAP. Code is at: https://github.com/chengyangfu/", "title": "" } ]
scidocsrr
1e7e07f7844e4136846b32769ac683f6
Speed improvement of object recognition using Boundary-Bitmap of histogram of oriented Gradients
[ { "docid": "8588a3317d4b594d8e19cb005c3d35c7", "text": "Histograms of Oriented Gradients (HOG) is one of the well-known features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N. Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on an image region and the combined features were classified by using a linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using the Stepwise Forward Selection (SFS) algorithm or the Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of a linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates is confirmed through experiments using the MIT pedestrian dataset.", "title": "" } ]
[ { "docid": "2d1e211857c50e6c997b88258c171c4d", "text": "Bulbus Fritillariae is the most commonly used antitussive herb in China. Eleven species of Fritillaria are recorded as Bulbus Fritillariae in the Chinese Pharmacopoeia. Bulbus Fritillariae Cirrhosae is a group of six Fritillaria species with higher efficiency and lower toxicity derived mainly from wild sources. Because of their higher market price, five other Fritillaria species are often sold deceptively as Bulbus Fritillariae Cirrhosae in the herbal market. To ensure the efficacy and safety of medicinal herbs, the authentication of botanical resources is the first step in quality control. Here, a DNA based identification method was developed to authenticate the commercial sources of Bulbus Fritillariae Cirrhosae. A putative DNA marker (0.65 kb) specific for Bulbus Fritillariae Cirrhosae was identified using the Random Amplified Polymorphic DNA (RAPD) technique. A DNA marker representing a Sequence Characterized Amplified Region (SCAR) was developed from a RAPD amplicon. The SCAR marker was successfully applied to differentiate Bulbus Fritillariae Cirrhosae from different species of Fritillaria. Additionally, the SCAR marker was also useful in identifying the commercial samples of Bulbus Fritillariae Cirrhosae. Our results indicated that the RAPD-SCAR method was rapid, accurate and applicable in identifying Bulbus Fritillariae Cirrhosae at the DNA level.", "title": "" }, { "docid": "3f6fcee0073e7aaf587602d6510ed913", "text": "BACKGROUND\nTreatment of early onset scoliosis (EOS) is challenging. In many cases, bracing will not be effective and growing rod surgery may be inappropriate. Serial, Risser casts may be an effective intermediate method of treatment.\n\n\nMETHODS\nWe studied 20 consecutive patients with EOS who received serial Risser casts under general anesthesia between 1999 and 2011. Analyses included diagnosis, sex, age at initial cast application, major curve severity, initial curve correction, curve magnitude at the time of treatment change or latest follow-up for those still in casts, number of casts per patient, the type of subsequent treatment, and any complications.\n\n\nRESULTS\nThere were 8 patients with idiopathic scoliosis, 6 patients with neuromuscular scoliosis, 5 patients with syndromic scoliosis, and 1 patient with skeletal dysplasia. Fifteen patients were female and 5 were male. The mean age at first cast was 3.8±2.3 years (range, 1 to 8 y), and the mean major curve magnitude was 74±18 degrees (range, 40 to 118 degrees). After initial cast application, the major curve measured 46±14 degrees (range, 25 to 79 degrees). At treatment change or latest follow-up for those still in casts, the major curve measured 53±24 degrees (range, 13 to 112 degrees). The mean time in casts was 16.9±9.1 months (range, 4 to 35 mo). The mean number of casts per patient was 4.7±2.2 casts (range, 1 to 9 casts). At the time of this study, 7 patients had undergone growing rod surgery, 6 patients were still undergoing casting, 5 returned to bracing, and 2 have been lost to follow-up. Four patients had minor complications: 2 patients each with superficial skin irritation and cast intolerance.\n\n\nCONCLUSIONS\nSerial Risser casting is a safe and effective intermediate treatment for EOS. 
It can stabilize relatively large curves in young children and allows the child to reach a more suitable age for other forms of treatment, such as growing rods.\n\n\nLEVEL OF EVIDENCE\nLevel IV; case series.", "title": "" }, { "docid": "d1e9eb1357381310c4540a6dcbe8973a", "text": "We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.", "title": "" }, { "docid": "ef0d7de77d25cc574fe361178138d310", "text": "This paper proposes a new, conceptually simple and effective forensic method to address both the generality and the fine-grained tampering localization problems of image forensics. Corresponding to each kind of image operation, a rich GMM (Gaussian Mixture Model) is learned as the image statistical model for small image patches. Thereafter, the binary classification problem, whether a given image block has been previously processed, can be solved by comparing the average patch log-likelihood values calculated on overlapping image patches under different GMMs of original and processed images. With comparisons to a powerful steganalytic feature, experimental results demonstrate the efficiency of the proposed method, for multiple image operations, on whole images and small blocks.", "title": "" }, { "docid": "6dfc558d273ec99ffa7dc638912d272c", "text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "title": "" }, { "docid": "2887fb157126497032c31459a8c9ae46", "text": "The amount of data in electronic and real world is constantly on the rise. Therefore, extracting useful knowledge from the total available data is very important and time consuming task. Data mining has various techniques for extracting valuable information or knowledge from data. These techniques are applicable for all data that are collected inall fields of science. Several research investigations are published about applications of data mining in various fields of sciences such as defense, banking, insurances, education, telecommunications, medicine and etc. 
This investigation attempts to provide a comprehensive survey of applications of data mining techniques in breast cancer diagnosis, treatment and prognosis to date. Further, the main challenges in this area are presented in this investigation. Since several research studies are currently ongoing on these issues, it is necessary to have a complete survey of all research completed up to now, along with the results of those studies and the important challenges which currently exist in this area, to help young researchers and present to them the main problems that still exist in this area.", "title": "" }, { "docid": "7b1782eb96134edda9bc5661b1ad4de6", "text": "Quality inspection is an important aspect of modern industrial manufacturing. In textile industry production, automated fabric inspection is important for maintaining fabric quality. For a long time, the fabric defect inspection process has been carried out by human visual inspection, and is thus insufficient and costly. Therefore, automatic fabric defect inspection is required to reduce the cost and time waste caused by defects. The investment in automated fabric defect detection is more than economical when reduction in labor cost and associated benefits are considered. The development of a fully automated web inspection system requires robust and efficient fabric defect detection algorithms. Image analysis has great potential to provide reliable measurements for detecting defects in fabrics. In this paper, using the principles of image analysis, an automatic fabric evaluation system, which enables automatic computerized defect detection (analysis of fabrics), was developed. Online fabric defect detection was tested automatically by analyzing fabric images captured by a digital camera.", "title": "" }, { "docid": "4f0865012265be44d8a39fedf01f70ce", "text": "In this paper, we derive new closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the two-user multiple access channel (MAC). The derived relations generalize the fundamental relation between the derivative of the mutual information and the minimum mean squared error (MMSE) to multiuser setups. We prove that the derivative of the mutual information with respect to the signal to noise ratio (SNR) is equal to the MMSE plus a covariance induced due to the interference, quantified by a term with respect to the cross correlation of the multiuser input estimates, the channels and the precoding matrices. We also derive new relations for the gradient of the conditional and non-conditional mutual information with respect to the MMSE. Capitalizing on the new fundamental relations, we investigate the linear precoding and power allocation policies that maximize the mutual information for the two-user MAC Gaussian channels with arbitrary input distributions. We show that the optimal design of linear precoders may satisfy a fixed-point equation as a function of the channel and the input constellation under a specific setup. We show also that the non-mutual interference in a multiuser setup introduces a term to the gradient of the mutual information which plays a fundamental role in the design of optimal transmission strategies, particularly the optimal precoding and power allocation, and explains the losses in the data rates.
Therefore, we provide a novel interpretation of the interference with respect to the channel, power, and input estimates of the main user and the interferer.", "title": "" }, { "docid": "c7eca07c70cab1eca77de2e10fc53a72", "text": "The revolutionary concept of Software Defined Networks (SDNs) potentially provides flexible and well-managed next-generation networks. All the hype surrounding the SDNs is predominantly because of its centralized management functionality, the separation of the control plane from the data forwarding plane, and enabling innovation through network programmability. Despite the promising architecture of SDNs, security was not considered as part of the initial design. Moreover, security concerns are potentially augmented considering the logical centralization of network intelligence. Furthermore, the security and dependability of the SDN have largely been a neglected topic and remain an open issue. The paper presents a broad overview of the security implications of each SDN layer/interface. This paper contributes further by devising a contemporary layered/interface taxonomy of the reported security vulnerabilities, attacks, and challenges of SDN. We also highlight and analyze the possible threats on each layer/interface of SDN to help design secure SDNs. Moreover, the ensuing paper contributes by presenting the state-of-the-art SDN security solutions. The categorization of solutions is followed by a critical analysis and discussion to devise a comprehensive thematic taxonomy. We advocate the production of secure and dependable SDNs by presenting potential requirements and key enablers. Finally, in an effort to anticipate secure and dependable SDNs, we present the ongoing open security issues, challenges and future research directions. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "be722a19b56ef604d6fe24012470e61f", "text": "In this paper, we derive optimality results for greedy Bayesian-network search algorithms that perform single-edge modifications at each step and use asymptotically consistent scoring criteria. Our results extend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribution is perfect with respect to a DAG defined over the observable variables, such search algorithms will identify this optimal (i.e. generative) DAG model. We relax their assumption about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assumption for real domains. Under this assumption, we guarantee that the search algorithms identify an inclusion-optimal model; that is, a model that (1) contains the generative distribution and (2) has no sub-model that contains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relationships in the generative distribution can be characterized by paths between singleton elements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network) even when the generative model includes unobserved variables, and even when the observed data is subject to selection bias.", "title": "" }, { "docid": "102e1718e03b3a4e96ee8c2212738792", "text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments.
The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.", "title": "" }, { "docid": "3abdace0aaee5e68ad48045e267634fb", "text": "This paper presents a two-transformer active- clamping zero-voltage-switching (ZVS) flyback converter, which is mainly composed of two active-clamping flyback converters. [1]-[2] By utilizing two separate transformers[3], the proposed converter allows a low-profile design to be readily implemented while retaining the merits of a conventional single-transformer topology. The presented two-transformer active-clamping ZVS flyback converter can approximately share the total load current between two secondaries. Therefore the transformer copper loss and the rectifier diode conduction loss can be decreased. Detailed analysis and design of this new two-transformer active-clamping ZVS flyback converter are described.", "title": "" }, { "docid": "2a8e6cf4d19f62147b92993c30cbfde8", "text": "Off-line recognition of text play a significant role in several application such as the automatic sorting of postal mail or editing old documents. It is the ability of the computer to distinguish characters and words. Automatic off-line recognition of text can be divided into the recognition of printed and handwritten characters. Off-line Arabic handwriting recognition still faces great challenges. This paper provides a survey of Arabic character recognition systems which are classified into the character recognition categories: printed and handwritten. Also, it examines the literature on the most significant work in handwritten text recognition without segmentation and discusses algorithms which split the words into characters.", "title": "" }, { "docid": "369fc545f9007f3cf01b6c6cdfc98c8e", "text": "The provision of on-demand access to Cloud computing services and infrastructure is attracting numerous consumers, as a result migrating from traditional server centric network to Cloud computing becomes inevitable to benefit from the technology through overall expense diminution. This growth of Cloud computing service consumers may influence the future data centers and operational models. The issue of inter-cloud operability due to different Cloud computing vendors Reference Architecture (RA) needs to be addressed to allow consumers to use services from any vendor. In this paper we present the Cloud computing RA of major vendors available in scientific literature and the RA of National Institute of Standard Technology(NIST) by comparing the nature of their RA (role based/layer based) and mapping activities and capabilities to the layer(s) or role(s). Keywords—Cloud Computing, Cloud Computing Reference Architecture (RA), Cloud Service Consumers, Cloud Service Providers, SaaS , PaaS , IaaS", "title": "" }, { "docid": "c0cec61d37c4e0fe1fa82f8c182c5fc7", "text": "PURPOSE OF REVIEW\nCompassion has been recognized as a key aspect of high-quality healthcare, particularly in palliative care. 
This article provides a general review of the current understanding of compassion in palliative care and summarizes emergent compassionate initiatives in palliative care at three interdependent levels: compassion for patients, compassion in healthcare professionals, and compassionate communities at the end of life.\n\n\nRECENT FINDINGS\nCompassion is a constructive response to suffering that enhances treatment outcomes, fosters the dignity of the recipient, and provides self-care for the giver. Patients and healthcare professionals value compassion and perceive a general lack of compassion in healthcare systems. Compassion for patients and for professionals' self-care can be trained and implemented top-down (institutional policies) and bottom-up (compassion training). 'Compassionate communities' is an important emerging movement that complements regular healthcare and social services with a community-level approach to offer compassionate care for people at the end of life.\n\n\nSUMMARY\nCompassion can be enhanced through diverse methodologies at the organizational, professional, and community levels. This enhancement of compassion has the potential to improve quality of palliative care treatments, enhance healthcare providers' satisfaction, and reduce healthcare costs.", "title": "" }, { "docid": "382aac30f231b98aec07106fd458e525", "text": "New proposals for prosthetic hands fabricated by means of 3D printing are either body-powered for partial hand amputees or myoelectric powered prostheses for transradial amputees. There are no current studies to develop powered 3D printed prostheses for transmetacarpal, probably because at this level of amputation there is little space to fit actuators and their associated electronics. In this work, a design of a 3D-printed hand prosthesis for transmetacarpal amputees and powered by DC micromotors is presented. Four-bar linkage mechanisms were used for the index, middle, ring and little fingers flexion movements, while a mechanism of cylindrical gears and worm drive were used for the thumb. Additionally, a method for customizing prosthetic fingers to match a user specific anthropometry is proposed. Sensors and actuators' selection is explained, and a position control algorithm was developed for each local controller by modeling the DC motors and transmission mechanisms. Finally, a basic control scheme was tested on the prototype for velocity and force evaluation.", "title": "" }, { "docid": "7440cb90073c8d8d58e28447a1774b2c", "text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. 
Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.", "title": "" }, { "docid": "d060d89c7c3fbdc35ccdf8b61fc26cbe", "text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviours can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modelling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. Since the process of discovering gathering patterns over large-scale trajectory databases can be quite lengthy, we further develop a set of well thought out techniques to improve the performance. These techniques, including effective indexing structures, fast pattern detection algorithms implemented with bit vectors, and incremental algorithms for handling new trajectory arrivals, collectively constitute an efficient solution for this challenging task. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.", "title": "" }, { "docid": "94d6182c7bf77d179e59247d04573bcd", "text": "Flash memory cells typically undergo a few thousand Program/Erase (P/E) cycles before they wear out. However, the programming strategy of flash devices and process variations cause some flash cells to wear out significantly faster than others. This paper studies this variability on two commercial devices, acknowledges its unavoidability, figures out how to identify the weakest cells, and introduces a wear unbalancing technique that let the strongest cells relieve the weak ones in order to lengthen the overall lifetime of the device. Our technique periodically skips or relieves the weakest pages whenever a flash block is programmed. Relieving the weakest pages can lead to a lifetime extension of up to 60% for a negligible memory and storage overhead, while minimally affecting (sometimes improving) the write performance. Future technology nodes will bring larger variance to page endurance, increasing the need for techniques similar to the one proposed in this work.", "title": "" }, { "docid": "d0b16a75fb7b81c030ab5ce1b08d8236", "text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. 
First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.", "title": "" } ]
scidocsrr
fea5f429e46f071fc2ed15ea9eb5f8f5
Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features
[ { "docid": "ebc8966779ba3b9e6a768f4c462093f5", "text": "Most state-of-the-art approaches for named-entity recognition (NER) use semi-supervised information in the form of word clusters and lexicons. Recently, neural network-based language models have been explored, as they, as a byproduct, generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.", "title": "" }, { "docid": "d38df66fe85b4d12093965e649a70fe1", "text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.", "title": "" }, { "docid": "f649286f5bb37530bbfced0a48513f4f", "text": "Collobert et al. (2011) showed that deep neural network architectures achieve state-of-the-art performance in many fundamental NLP tasks, including Named Entity Recognition (NER). However, results were only reported for English. This paper reports on experiments for German Named Entity Recognition, using the data from the GermEval 2014 shared task on NER. Our system achieves an F1-measure of 75.09% according to the official metric.", "title": "" } ]
[ { "docid": "cff3b4f6db26e66893a9db95fb068ef1", "text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.", "title": "" }, { "docid": "6d65156ca8fed2aa61dda6f5c98ecdce", "text": "Emerging digital environments and infrastructures, such as distributed security services and distributed computing services, have generated new options of communication, information sharing, and resource utilization in past years. However, when distributed services are used, the question arises of to what extent we can trust service providers to not violate security requirements, whether in isolation or jointly. Answering this question is crucial for designing trustworthy distributed systems and selecting trustworthy service providers. This paper presents a novel trust measurement method for distributed systems, and makes use of propositional logic and probability theory. The results of the qualitative part include the specification of a formal trust language and the representation of its terms by means of propositional logic formulas. Based on these formulas, the quantitative part returns trust metrics for the determination of trustworthiness with which given distributed systems are assumed to fulfill a particular security requirement.", "title": "" }, { "docid": "5c3358aa3d9a931ba7c9186b1f5a2362", "text": "Compared with word-level and sentence-level convolutional neural networks (ConvNets), the character-level ConvNets has a better applicability for misspellings and typos input. Due to this, recent researches for text classification mainly focus on character-level ConvNets. However, while the majority of these researches employ English corpus for the character-level text classification, few researches have been done using Chinese corpus. This research hopes to bridge this gap, exploring character-level ConvNets for Chinese corpus test classification. We have constructed a large-scale Chinese dataset, and the result shows that character-level ConvNets works better on Chinese character dataset than its corresponding pinyin format dataset, which is the general solution in previous researches. This is the first time that character-level ConvNets has been applied to Chinese character dataset for text classification problem.", "title": "" }, { "docid": "4f6ce186679f9ab4f0aaada92ccf5a84", "text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. 
Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. Our results point to promising efforts in the area and a discussion of the future directions of research in this area.", "title": "" }, { "docid": "4d4bf7b06c88fba54b794921ee67109f", "text": "This article provides surgical pathologists an overview of health information systems (HISs): what they are, what they do, and how such systems relate to the practice of surgical pathology. Much of this article is dedicated to the electronic medical record. Information, in how it is captured, transmitted, and conveyed, drives the effectiveness of such electronic medical record functionalities. So critical is information from pathology in integrated clinical care that surgical pathologists are becoming gatekeepers of not only tissue but also information. Better understanding of HISs can empower surgical pathologists to become stakeholders who have an impact on the future direction of quality integrated clinical care.", "title": "" }, { "docid": "9852ef6f1d5df6ca1cee8aebef2f5b78", "text": "A broadband coplanar waveguide (CPW) fed bow-tie slot antenna is proposed. By using a linear tapered transition, a 37% impedance bandwidth at -10 dB return loss is achieved. The antenna structure is very simple and the radiation patterns of the antenna in the whole bandwidth remain stable; moreover, the cross-polarization level is lower. An antenna model is fabricated on a high dielectric constant substrate. Experiments show that the simulated results agree well with the measured ones.", "title": "" }, { "docid": "0bd720d912575c0810c65d04f6b1712b", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" }, { "docid": "f0cd43ff855d6b10623504bf24a40fdc", "text": "Neural network-based encoder-decoder models are among recent attractive methodologies for tackling natural language generation tasks. This paper investigates the usefulness of structural syntactic and semantic information additionally incorporated in a baseline neural attention-based model. 
We encode results obtained from an abstract meaning representation (AMR) parser using a modified version of Tree-LSTM. Our proposed attention-based AMR encoder-decoder model improves headline generation benchmarks compared with the baseline neural attention-based model.", "title": "" }, { "docid": "71c2478c1eb50681fe0793976ffc24fe", "text": "Background subtraction is a common first step in the field of video processing and it is used to reduce the effective image size in subsequent processing steps by segmenting the mostly static background from the moving or changing foreground. In this paper previous approaches towards background modeling are extended to handle videos accompanied by information gained from a novel 2D/3D camera. This camera contains a color and a PMD chip which operates on the Time-of-Flight operating principle. The background is estimated using the widely spread Gaussian mixture model in color as well as in depth and amplitude modulation. A new matching function is presented that allows for better treatment of shadows and noise and reduces block artifacts. Problems and limitations to overcome the problem of fusing high resolution color information with low resolution depth data are addressed and the approach is tested with different parameters on several scenes and the results are compared to common and widely accepted methods.", "title": "" }, { "docid": "8020c67dd790bcff7aea0e103ea672f1", "text": "Recent efforts in satellite communication research considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this can be provided by the W-band (70-110 GHz). Recently, a scientific experiment carried on by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of the exploitation of the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler-shift can severely compromise the efficiency of the modulation system, particularly for what concerns the aspects related to the carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. In this work, a novel carrier recovery algorithm has been proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering some constraints about transmitted power, data BER and receiver bandwidth", "title": "" }, { "docid": "889e20ac7d27caeb0c7158f194161d03", "text": "We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. 
Our empirical evaluation shows that invertible ResNets perform competitively with both stateof-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.", "title": "" }, { "docid": "5259c661992baa926173348c4e0b0cd2", "text": "A controller assistant system is developed based on the closed-form solution of an offline optimization problem for a four-wheel-drive front-wheel-steerable vehicle. The objective of the controller is to adjust the actual vehicle attitude and motion according to the driver's manipulating commands. The controller takes feedback from acceleration signals, and the imposed conditions and limitations on the controller are studied through the concept of state-derivative feedback control systems. The controller gains are optimized using linear matrix inequality (LMI) and genetic algorithm (GA) techniques. Reference signals are calculated using a driver command interpreter module (DCIM) to accurately interpret the driver's intentions for vehicle motion and to allow the controller to generate proper control actions. It is shown that the controller effectively enhances the handling performance and stability of the vehicle under different road conditions and driving scenarios. Although controller performance is studied for a four-wheel-drive front-wheel-steerable vehicle, the algorithm can also be applied to other vehicle configurations with slight changes.", "title": "" }, { "docid": "128a616b2c33dded974d792579662f2c", "text": "III Editorial V Corporate social responsibility: Challenges and opportunities for trade unionists, by Dwight W. Justice 1 Sustainable bargaining: Labour agreements go global, by Ian Graham 15 The social responsibilities of business and workers' rights, by Guy Ryder 21 OECD Guidelines – one tool for corporate social accountability, by John Evans 25 Corporate social responsibility – new morals for business?, by Philip Jennings 31 Corporate social responsibility in Europe: A chance for social dialogue?, by Anne Renaut 35 Strengths and weaknesses of Belgium's social label, by Bruno Melckmans 41 Social auditing, freedom of association and the right to collective bargaining, by Philip Hunter and Michael Urminsky 47 Corporate public reporting on labour and employment issues, by Michael Urminsky 55 The ILO Conventions: A \" major reference \" , by Nicole Notat 63 Workers' capital and corporate social responsibility, by Jon Robinson 67 The social responsibility of business, by Reg Green 75", "title": "" }, { "docid": "231bb5d4b6fcfecbfd38262648ea6882", "text": "Recent innovations in text mining facilitate the use of novel data for sentiment analysis related to financial markets, and promise new approaches to the field of behavioural finance. Traditionally, text mining has allowed a near-real time analysis of available news feeds. The recent dissemination of web 2.0 has seen a drastic increase of user participation, providing comments on websites, social networks and blogs, creating a novel source of rich and personal sentiment data potentially of value to behavioural finance. This study explores the efficacy of using novel sentiment indicators from MarketPsych, which analyses social media in addition to newsfeeds to quantify various levels of individual's emotions, as a predictor for financial time series returns of the Australian Dollar (AUD) - US Dollar (USD) exchange rate. 
As one of the first studies evaluating both news and social media sentiment indicators as explanatory variables for linear and nonlinear regression algorithms, our study aims to make an original contribution to behavioural finance, combining technical and behavioural aspects of model building. An empirical out-of-sample evaluation with multiple input structures compares multivariate linear regression models (MLR) with multilayer perceptron (MLP) neural networks for descriptive modelling. The results indicate that sentiment indicators are explanatory for market movements of exchange rate returns, with nonlinear MLPs showing superior accuracy over linear regression models with a directional out-of-sample accuracy of 60.26% using cross validation.", "title": "" }, { "docid": "8f2b100dac154c54d928928296f830f6", "text": "The RPL routing protocol published in RFC 6550 was designed for efficient and reliable data collection in low-power and lossy networks. Specifically, it constructs a Destination Oriented Directed Acyclic Graph (DODAG) for data forwarding. However, due to the uneven deployment of sensor nodes in large areas, and the heterogeneous traffic patterns in the network, some sensor nodes may have much heavier workload in terms of packets forwarded than others. Such unbalanced workload distribution will result in these sensor nodes quickly exhausting their energy, and therefore shorten the overall network lifetime. In this paper, we propose a load balanced routing protocol based on the RPL protocol, named LB-RPL, to achieve balanced workload distribution in the network. Targeted at the low-power and lossy network environments, LB-RPL detects workload imbalance in a distributed and non-intrusive fashion. In addition, it optimizes the data forwarding path by jointly considering both workload distribution and link-layer communication qualities. We demonstrate the performance superiority of our LB-RPL protocol over original RPL through extensive simulations.", "title": "" }, { "docid": "e9b8787e5bb1f099e914db890e04dc23", "text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.", "title": "" }, { "docid": "1503d2a235b2ce75516d18cdea42bbb5", "text": "Phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3 or PIP3) mediates signalling pathways as a second messenger in response to extracellular signals. Although primordial functions of phospholipids and RNAs have been hypothesized in the ‘RNA world’, physiological RNA–phospholipid interactions and their involvement in essential cellular processes have remained a mystery. We explicate the contribution of lipid-binding long non-coding RNAs (lncRNAs) in cancer cells. 
Among them, long intergenic non-coding RNA for kinase activation (LINK-A) directly interacts with the AKT pleckstrin homology domain and PIP3 at the single-nucleotide level, facilitating AKT–PIP3 interaction and consequent enzymatic activation. LINK-A-dependent AKT hyperactivation leads to tumorigenesis and resistance to AKT inhibitors. Genomic deletions of the LINK-A PIP3-binding motif dramatically sensitized breast cancer cells to AKT inhibitors. Furthermore, meta-analysis showed the correlation between LINK-A expression and incidence of a single nucleotide polymorphism (rs12095274: A > G), AKT phosphorylation status, and poor outcomes for breast and lung cancer patients. PIP3-binding lncRNA modulates AKT activation with broad clinical implications.", "title": "" }, { "docid": "956cba3ab1f500fbb2d3d7a0723a0f86", "text": "Decision guidance models are a means for design space exploration and documentation. In this paper, we present decision guidance models for microservice monitoring. The selection of a monitoring system is an essential part of each microservice architecture due to the high level of dynamic structure and behavior of such a system. We present decision guidance models for generation of monitoring data, data management, processing monitoring data, and for disseminating and presenting monitoring information to stakeholders. The presented models have been derived from literature, our previous work on monitoring for distributed systems and microservice-based systems, and by analyzing existing monitoring systems. The developed models have been used for discussing monitoring requirements for a microservice-based system with a company in the process automation domain. They are part of a larger effort for developing decision guidance models for microservice architecture in general.", "title": "" }, { "docid": "1bd021886eac2358936240fb248cc6a3", "text": "We report the influence of uniaxial tensile mechanical strain in the range 0-2.2% on the phonon spectra and bandstructures of monolayer and bilayer molybdenum disulfide (MoS2) two-dimensional crystals. First, we employ Raman spectroscopy to observe phonon softening with increased strain, breaking the degeneracy in the E' Raman mode of MoS2, and extract a Grüneisen parameter of ~1.06. Second, using photoluminescence spectroscopy we measure a decrease in the optical band gap of MoS2 that is approximately linear with strain, ~45 meV/% strain for monolayer MoS2 and ~120 meV/% strain for bilayer MoS2. Third, we observe a pronounced strain-induced decrease in the photoluminescence intensity of monolayer MoS2 that is indicative of the direct-to-indirect transition of the character of the optical band gap of this material at applied strain of ~1%. These observations constitute a demonstration of strain engineering the band structure in the emergent class of two-dimensional crystals, transition-metal dichalcogenides.", "title": "" }, { "docid": "296c4cddd47307f8bf0161b8ca393840", "text": "We investigate non-orthogonal access with a successive interference canceller (SIC) in the cellular multiple-input multiple-output (MIMO) downlink for systems beyond LTE-Advanced. 
Taking into account the overhead for the downlink reference signaling for channel estimation at the user terminal in the case of non-orthogonal multiuser multiplexing and the applicability of the SIC receiver in the MIMO downlink, we propose intra-beam superposition coding of a multiuser signal at the transmitter and the spatial filtering of inter-beam interference followed by the intra-beam SIC at the user terminal receiver. The intra-beam SIC cancels out the inter-user interference within a beam. Furthermore, the transmitter beamforming (precoding) matrix is controlled based on open loop-type random beamforming, which is very efficient in terms of the amount of feedback information from the user terminal. Simulation results show that the proposed non-orthogonal access scheme with random beamforming and the intra-beam SIC simultaneously achieves better sum and cell-edge user throughput compared to orthogonal access, which is assumed in LTE-Advanced.", "title": "" } ]
scidocsrr
0e7bbd9884f3ae121495f37f04809883
Model predictive control of a multi-rotor with a slung load for avoiding obstacles
[ { "docid": "6115cdfda5f7eff0f13d0d841176a3f3", "text": "A quadrotor with a cable-suspended load with eight degrees of freedom and four degrees underactuation is considered and the system is established to be a differentially-flat hybrid system. Using the flatness property, a trajectory generation method is presented that enables finding nominal trajectories with various constraints that not only result in minimal load swing if required, but can also cause a large swing in the load for dynamically agile motions. A control design is presented for the system specialized to the planar case, that enables tracking of either the quadrotor attitude, the load attitude or the position of the load. Stability proofs for the controller design and experimental validation of the proposed controller are presented.", "title": "" }, { "docid": "03b3aa5c74eb4d66c1bd969fbce835c7", "text": "In the past few decades, unmanned aerial vehicles (UAVs) have become promising mobile platforms capable of navigating semiautonomously or autonomously in uncertain environments. The level of autonomy and the flexible technology of these flying robots have rapidly evolved, making it possible to coordinate teams of UAVs in a wide spectrum of tasks. These applications include search and rescue missions; disaster relief operations, such as forest fires [1]; and environmental monitoring and surveillance. In some of these tasks, UAVs work in coordination with other robots, as in robot-assisted inspection at sea [2]. Recently, radio-controlled UAVs carrying radiation sensors and video cameras were used to monitor, diagnose, and evaluate the situation at Japans Fukushima Daiichi nuclear plant facility [3].", "title": "" } ]
[ { "docid": "87133250a9e04fd42f5da5ecacd39d70", "text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.", "title": "" }, { "docid": "03abab0bc882ada2c7ba4d512ac98d0e", "text": "The main goal of this project is to use the solar or AC power to charge all kind of regulated and unregulated battery like electric vehicle’s battery. Besides that, it will charge Lithium-ion (Li-ion) batteries of different voltage level. A standard pulse width modulation (PWM) which is controlled by duty cycle is used to build the solar or AC fed battery charger. A microcontroller unit and Buck/Boost converters are also used to build the charger. This charger changes the output voltages from variable input voltages with fixed amplitude in PWM. It gives regulated voltages for charging sensitive batteries. An unregulated output voltage can be obtained for electric vehicle’s battery. The battery charger is tested and the obtained result allowed to conclude the conditions of permanent control on the battery charger.", "title": "" }, { "docid": "a3e3ccb4dad5777196dcd3749295161e", "text": "There are increasing volumes of spatio-temporal data from various sources such as sensors, social networks and urban environments. Analysis of such data requires flexible exploration and visualizations, but queries that span multiple geographical regions over multiple time slices are expensive to compute, making it challenging to attain interactive speeds for large data sets. In this paper, we propose a new indexing scheme that makes use of modern GPUs to efficiently support spatio-temporal queries over point data. The index covers multiple dimensions, thus allowing simultaneous filtering of spatial and temporal attributes. It uses a block-based storage structure to speed up OLAP-type queries over historical data, and supports query processing over in-memory and disk-resident data. We present different query execution algorithms that we designed to allow the index to be used in different hardware configurations, including CPU-only, GPU-only, and a combination of CPU and GPU. 
To demonstrate the effectiveness of our techniques, we implemented them on top of MongoDB and performed an experimental evaluation using two real-world data sets: New York City's (NYC) taxi data - consisting of over 868 million taxi trips spanning a period of five years, and Twitter posts - over 1.1 billion tweets collected over a period of 14 months. Our results show that our GPU-based index obtains interactive, sub-second response times for queries over large data sets and leads to at least two orders of magnitude speedup over spatial indexes implemented in existing open-source and commercial database systems.", "title": "" }, { "docid": "7e422bc9e691d552543c245e7c154cbf", "text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.", "title": "" }, { "docid": "abdc445e498c6d04e8f046e9c2610f9f", "text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.", "title": "" }, { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "8ba1621257292f04bb6fa2328ba5abda", "text": "In this paper we propose an explicit computer model for learning natural language syntax based on Angluin's (1982) efficient induction algorithms, using a complete corpus of grammatical example sentences. We use these results to show how inductive inference methods may be applied to learn substantial, coherent subparts of at least one natural language – English – that are not susceptible to the kinds of learning envisioned in linguistic theory. As two concrete case studies, we show how to learn English auxiliary verb sequences (such as could be taking, will have been taking) and the sequences of articles and adjectives that appear before noun phrases (such as the very old big deer). Both systems can be acquired in a computationally feasible amount of time using either positive examples, or, in an incremental mode, with implicit negative examples (examples outside a finite corpus are considered to be negative examples). As far as we know, this is the first computer procedure that learns a full-scale range of noun subclasses and noun phrase structure. The generalizations and the time required for acquisition match our knowledge of child language acquisition for these two cases. More importantly, these results show that just where linguistic theories admit to highly irregular subportions, we can apply efficient automata-theoretic learning algorithms. Since the algorithm works only for fragments of language syntax, we do not believe that it suffices for all of language acquisition. Rather, we would claim that language acquisition is nonuniform and susceptible to a variety of acquisition strategies; this algorithm may be one these.", "title": "" }, { "docid": "ce9b9cc57277b635262a5d4af999dc32", "text": "Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. 
Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.", "title": "" }, { "docid": "9a7f9ecf4dafaaaee2a76d49b51c545e", "text": "Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).", "title": "" }, { "docid": "678b90e0a7fdc1166928ff952b603f29", "text": "Semantic search promises to produce precise answers to user queries by taking advantage of the availability of explicit semantics of information in the context of the semantic web. Existing tools have been primarily designed to enhance the performance of traditional search technologies but with little support for naive users, i.e., ordinary end users who are not necessarily familiar with domain specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine, which pays special attention to this issue by hiding the complexity of semantic search from end users and making it easy to use and effective. In contrast with existing semantic-based keyword search engines which typically compromise their capability of handling complex user queries in order to overcome the problem of knowledge overhead, SemSearch not only overcomes the problem of knowledge overhead but also supports complex queries. Further, SemSearch provides comprehensive means to produce precise answers that on the one hand satisfy user queries and on the other hand are self-explanatory and understandable by end users. A prototype of the search engine has been implemented and applied in the semantic web portal of our lab. An initial evaluation shows promising results.", "title": "" }, { "docid": "62c49155e92350a0420fb215f0a92f78", "text": "Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is perhaps the key problem of the discipline of Distributed Artificial Intelligence (DAI). In order to make advances it is important that the theories and principles which guide this central activity are uncovered and analysed in a systematic and rigourous manner. To this end, this paper models agent communities using a distributed goal search formalism, and argues that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in all DAI systems. 1. 
The Coordination Problem Participation in any social situation should be both simultaneously constraining, in that agents must make a contribution to it, and yet enriching, in that participation provides resources and opportunities which would otherwise be unavailable (Gerson, 1976). Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is the key to achieving this objective. Without coordination the benefits of decentralised problem solving vanish and the community may quickly degenerate into a collection of chaotic, incohesive individuals. In more detail, the objectives of the coordination process are to ensure: that all necessary portions of the overall problem are included in the activities of at least one agent, that agents interact in a manner which permits their activities to be developed and integrated into an overall solution, that team members act in a purposeful and consistent manner, and that all of these objectives are achievable within the available computational and resource limitations (Lesser and Corkill, 1987). Specific examples of coordination activities include supplying timely information to needy agents, ensuring the actions of multiple actors are synchronised and avoiding redundant problem solving. There are three main reasons why the actions of multiple agents need to be coordinated: • because there are dependencies between agents’ actions Interdependence occurs when goals undertaken by individual agents are related either because local decisions made by one agent have an impact on the decisions of other community members (eg when building a house, decisions about the size and location of rooms impacts upon the wiring and plumbing) or because of the possibility of harmful interactions amongst agents (eg two mobile robots may attempt to pass through a narrow exit simultaneously, resulting in a collision, damage to the robots and blockage of the exit). Contribution to Foundations of DAI 2 • because there is a need to meet global constraints Global constraints exist when the solution being developed by a group of agents must satisfy certain conditions if it is to be deemed successful. For instance, a house building team may have a budget of £250,000, a distributed monitoring system may have to react to critical events within 30 seconds and a distributed air traffic control system may have to control the planes with a fixed communication bandwidth. If individual agents acted in isolation and merely tried to optimise their local performance, then such overarching constraints are unlikely to be satisfied. Only through coordinated action will acceptable solutions be developed. • because no one individual has sufficient competence, resources or information to solve the entire problem Many problems cannot be solved by individuals working in isolation because they do not possess the necessary expertise, resources or information. Relevant examples include the tasks of lifting a heavy object, driving in a convoy and playing a symphony. It may be impractical or undesirable to permanently synthesize the necessary components into a single entity because of historical, political, physical or social constraints, therefore temporary alliances through cooperative problem solving may be the only way to proceed. 
Differing expertise may need to be combined to produce a result outside of the scope of any of the individual constituents (eg in medical diagnosis, knowledge about heart disease, blood disorders and respiratory problems may need to be combined to diagnose a patient’s illness). Different agents may have different resources (eg processing power, memory and communications) which all need to be harnessed to solve a complex problem. Finally, different agents may have different information or viewpoints of a problem (eg in concurrent engineering systems, the same product may be viewed from a design, manufacturing and marketing perspective). Even when individuals can work independently, meaning coordination is not essential, information discovered by one agent can be of sufficient use to another that the two agents can solve the problem more than twice as fast. For example, when searching for a lost object in a large area it is often better, though not essential, to do so as a team. Analysis of this “combinatorial implosion” phenomena (Kornfield and Hewitt, 1981) has resulted in the postulation that cooperative search, when sufficiently large, can display universal characteristics which are independent of the nature of either the individual processes or the particular domain being tackled (Clearwater et al., 1991). If all the agents in the system could have complete knowledge of the goals, actions and interactions of their fellow community members and could also have infinite processing power, it would be possible to know exactly what each agent was doing at present and what it is intending to do in the future. In such instances, it would be possible to avoid conflicting and redundant efforts and systems could be perfectly coordinated (Malone, 1987). However such complete knowledge is infeasible, in any community of reasonable complexity, because bandwidth limitations make it impossible for agents to be constantly informed of all developments. Even in modestly sized communities, a complete analysis to determine the detailed activities of each agent is impractical the computation and communication costs of determining the optimal set and allocation of activities far outweighs the improvement in problem solving performance (Corkill and Lesser, 1986). Contribution to Foundations of DAI 3 As all community members cannot have a complete and accurate perspective of the overall system, the next easiest way of ensuring coherent behaviour is to have one agent with a wider picture. This global controller could then direct the activities of the others, assign agents to tasks and focus problem solving to ensure coherent behaviour. However such an approach is often impractical in realistic applications because even keeping one agent informed of all the actions in the community would swamp the available bandwidth. Also the controller would become a severe communication bottleneck and would render the remaining components unusable if it failed. To produce systems without bottlenecks and which exhibit graceful degradation of performance, most DAI research has concentrated on developing communities in which both control and data are distributed. Distributed control means that individuals have a degree of autonomy in generating new actions and in deciding which tasks to do next. When designing such systems it is important to ensure that agents spend the bulk of their time engaged on solving the domain level problems for which they were built, rather than in communication and coordination activities. 
To this end, the community should be decomposed into the most modular units possible. However the designer should ensure that these units are of sufficient granularity to warrant the overhead inherent in goal distribution distributing small tasks can prove more expensive than performing them in one place (Durfee et al., 1987). The disadvantage of distributing control and data is that knowledge of the system’s overall state is dispersed throughout the community and each individual has only a partial and imprecise perspective. Thus there is an increased degree of uncertainty about each agent’s actions, meaning that it more difficult to attain coherent global behaviour for example, agents may spread misleading and distracting information, multiple agents may compete for unshareable resources simultaneously, agents may unwittingly undo the results of each others activities and the same actions may be carried out redundantly. Also the dynamics of such systems can become extremely complex, giving rise to nonlinear oscillations and chaos (Huberman and Hogg, 1988). In such cases the coordination process becomes correspondingly more difficult as well as more important1. To develop better and more integrated models of coordination, and hence improve the efficiency and utility of DAI systems, it is necessary to obtain a deeper understanding of the fundamental concepts which underpin agent interactions. The first step in this analysis is to determine the perspective from which coordination should be described. When viewing agents from a purely behaviouristic (external) perspective, it is, in general, impossible to determine whether they have coordinated their actions. Firstly, actions may be incoherent even if the agents tried to coordinate their behaviour. This may occur, for instance, because their models of each other or of the environment are incorrect. For example, robot1 may see robot2 heading for exit2 and, based on this observation and the subsequent deduction that it will use this exit, decide to use exit1. However if robot2 is heading towards exit2 to pick up a particular item and actually intends to use exit1 then there may be incoherent behaviour (both agents attempting to use the same exit) although there was coordination. Secondly, even if there is coherent action, it may not", "title": "" }, { "docid": "6bb318e50887e972cbfe52936c82c26f", "text": "We model the photo cropping problem as a cascade of attention box regression and aesthetic quality classification, based on deep learning. A neural network is designed that has two branches for predicting attention bounding box and analyzing aesthetics, respectively. The predicted attention box is treated as an initial crop window where a set of cropping candidates are generated around it, without missing important information. Then, aesthetics assessment is employed to select the final crop as the one with the best aesthetic quality. With our network, cropping candidates share features within full-image convolutional feature maps, thus avoiding repeated feature computation and leading to higher computation efficiency. Via leveraging rich data for attention prediction and aesthetics assessment, the proposed method produces high-quality cropping results, even with the limited availability of training data for photo cropping. 
The experimental results demonstrate the competitive results and fast processing speed (5 fps with all steps).", "title": "" }, { "docid": "b255a513fe6140fc9534087563efb36e", "text": "Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the \"complete information\" of a data item (taking into account the probability density function (pdf)) is utilized. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments have been conducted which show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.", "title": "" }, { "docid": "88660d823f1c20cf0b75b665c66af696", "text": "A pectus index can be derived from dividing the transverse diameter of the chest by the anterior-posterior diameter on a simple CT scan. In a preliminary report, all patients who required operative correction for pectus excavatum had a pectus index greater than 3.25 while matched normal controls were all less than 3.25. A simple CT scan may be a useful adjunct in objective evaluation of children and teenagers for surgery of pectus excavatum.", "title": "" }, { "docid": "518dc6882c6e13352c7b41f23dfd2fad", "text": "The Diagnostic and Statistical Manual of Mental Disorders (DSM) is considered to be the gold standard manual for assessing the psychiatric diseases and is currently in its fourth version (DSM-IV), while a fifth (DSM-V) has just been released in May 2013. The DSM-V Anxiety Work Group has put forward recommendations to modify the criteria for diagnosing specific phobias. In this manuscript, we propose to consider the inclusion of nomophobia in the DSM-V, and we make a comprehensive overview of the existing literature, discussing the clinical relevance of this pathology, its epidemiological features, the available psychometric scales, and the proposed treatment. Even though nomophobia has not been included in the DSM-V, much more attention is paid to the psychopathological effects of the new media, and the interest in this topic will increase in the near future, together with the attention and caution not to hypercodify as pathological normal behaviors.", "title": "" }, { "docid": "426826d9ede3c0146840e4ec9190e680", "text": "We propose methods to classify lines of military chat, or posts, which contain items of interest. We evaluated several current text categorization and feature selection methodologies on chat posts. Our chat posts are examples of 'micro-text', or text that is generally very short in length, semi-structured, and characterized by unstructured or informal grammar and language. 
Although this study focused specifically on tactical updates via chat, we believe the findings are applicable to content of a similar linguistic structure. Completion of this milestone is a significant first step in allowing for more complex categorization and information extraction.", "title": "" }, { "docid": "1ea8d9c2b1f2285c17082dedda550afe", "text": "Background: Medical profession has been always a noble and prestigious path but the endeavour behind it has been truly known by the persons who undergone the training of becoming a doctor. Medical students face many stresses in their academic life. This study is carried out to provide data and re-establish the effect of academic examination stress on the plasma cortisol levels. Methods: A longitudinal follow up study was carried out on the first MBBS medical students who were appearing for their first credit examination by measuring their plasma cortisol levels in pre-examination and post-examination stage in fasting condition. Serum Cortisol was estimated by using Byer’s Advia Centuse advanced Chemiluminescence’s technique with inbuilt calibrators and controls; the results obtained were statistically analysed using paired ‘t’ test. Results: On statistically analysing the results of our study we found that medical students in stage – I had significantly higher values of plasma cortisol than when they were in stage – II. Conclusion: The results cover a significant correlation of examination stress factors to changes in plasma cortisol values. It is important for medical students to use stress reducing measures, or reduce them as much as possible in order to avoid factors that can affect themselves and their patients in stressful way.", "title": "" }, { "docid": "77a92d896da31390bb0bd0c593361c6b", "text": "Non-inflammatory cystic lesions of the pancreas are increasingly recognized. Two distinct entities have been defined, i.e., intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN). Ovarian-type stroma has been proposed as a requisite to distinguish MCN from IPMN. Some other distinct features to characterize IPMN and MCN have been identified, but there remain ambiguities between the two diseases. In view of the increasing frequency with which these neoplasms are being diagnosed worldwide, it would be helpful for physicians managing patients with cystic neoplasms of the pancreas to have guidelines for the diagnosis and treatment of IPMN and MCN. The proposed guidelines represent a consensus of the working group of the International Association of Pancreatology.", "title": "" }, { "docid": "0a2a39149013843b0cece63687ebe9e9", "text": "177Lu-labeled PSMA-617 is a promising new therapeutic agent for radioligand therapy (RLT) of patients with metastatic castration-resistant prostate cancer (mCRPC). Initiated by the German Society of Nuclear Medicine, a retrospective multicenter data analysis was started in 2015 to evaluate efficacy and safety of 177Lu-PSMA-617 in a large cohort of patients.\n\n\nMETHODS\nOne hundred forty-five patients (median age, 73 y; range, 43-88 y) with mCRPC were treated with 177Lu-PSMA-617 in 12 therapy centers between February 2014 and July 2015 with 1-4 therapy cycles and an activity range of 2-8 GBq per cycle. Toxicity was categorized by the common toxicity criteria for adverse events (version 4.0) on the basis of serial blood tests and the attending physician's report. 
The primary endpoint for efficacy was biochemical response as defined by a prostate-specific antigen decline ≥ 50% from baseline to at least 2 wk after the start of RLT.\n\n\nRESULTS\nA total of 248 therapy cycles were performed in 145 patients. Data for biochemical response in 99 patients as well as data for physician-reported and laboratory-based toxicity in 145 and 121 patients, respectively, were available. The median follow-up was 16 wk (range, 2-30 wk). Nineteen patients died during the observation period. Grade 3-4 hematotoxicity occurred in 18 patients: 10%, 4%, and 3% of the patients experienced anemia, thrombocytopenia, and leukopenia, respectively. Xerostomia occurred in 8%. The overall biochemical response rate was 45% after all therapy cycles, whereas 40% of patients already responded after a single cycle. Elevated alkaline phosphatase and the presence of visceral metastases were negative predictors and the total number of therapy cycles positive predictors of biochemical response.\n\n\nCONCLUSION\nThe present retrospective multicenter study of 177Lu-PSMA-617 RLT demonstrates favorable safety and high efficacy exceeding those of other third-line systemic therapies in mCRPC patients. Future phase II/III studies are warranted to elucidate the survival benefit of this new therapy in patients with mCRPC.", "title": "" }, { "docid": "1564a94998151d52785dd0429b4ee77d", "text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.", "title": "" } ]
scidocsrr
2795d9c15f3c9210879ea4fa99c1c5d0
Design of high efficiency line start permanent magnet motor for submersible pumps
[ { "docid": "90f3c2ea17433ee296702cca53511b9e", "text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.", "title": "" } ]
[ { "docid": "29ea1cfa755ae438f989d41d85dfefaa", "text": "Early case studies and noncontrolled trial studies focusing on the treatment of delusions and hallucinations have laid the foundation for more recent developments in comprehensive cognitive behavioral therapy (CBT) interventions for schizophrenia. Seven randomized, controlled trial studies testing the efficacy of CBT for schizophrenia were identified by electronic search (MEDLINE and PsychInfo) and by personal correspondence. After a review of these studies, effect size (ES) estimates were computed to determine the statistical magnitude of clinical change in CBT and control treatment conditions. CBT has been shown to produce large clinical effects on measures of positive and negative symptoms of schizophrenia. Patients receiving routine care and adjunctive CBT have experienced additional benefits above and beyond the gains achieved with routine care and adjunctive supportive therapy. These results reveal promise for the role of CBT in the treatment of schizophrenia although additional research is required to test its efficacy, long-term durability, and impact on relapse rates and quality of life. Clinical refinements are needed also to help those who show only minimal benefit with the intervention.", "title": "" }, { "docid": "06a10608b51cc1ae6c7ef653faf637a9", "text": "WE aLL KnoW how to protect our private or most valuable data from unauthorized access: encrypt it. When a piece of data M is encrypted under a key K to yield a ciphertext C=EncK(M), only the intended recipient (who knows the corresponding secret decryption key S) will be able to invert the encryption function and recover the original plaintext using the decryption algorithm DecS(C)=DecS(EncK(M))=M. Encryption today—in both symmetric (where S=K) and public key versions (where S remains secret even when K is made publicly available)—is widely used to achieve confidentiality in many important and well-known applications: online banking, electronic shopping, and virtual private networks are just a few of the most common applications using encryption, typically as part of a larger protocol, like the TLS protocol used to secure communication over the Internet. Still, the use of encryption to protect valuable or sensitive data can be very limiting and inflexible. Once the data M is encrypted, the corresponding ciphertext C behaves to a large extent as a black box: all we can do with the box is keep it closed or opened in order to access and operate on the data. In many situations this may be exactly what we want. For example, take a remote storage system, where we want to store a large collection of documents or data files. We store the data in encrypted form, and when we want to access a specific piece of data, we retrieve the corresponding ciphertext, decrypting it locally on our own trusted computer. But as soon as we go beyond the simple data storage/ retrieval model, we are in trouble. Say we want the remote system to provide a more complex functionality, like a database system capable of indexing and searching our data, or answering complex relational or semistructured queries. Using standard encryption technology we are immediately faced with a dilemma: either we store our data unencrypted and reveal our precious or sensitive data to the storage/ database service provider, or we encrypt it and make it impossible for the provider to operate on it. 
If data is encrypted, then answering even a simple counting query (for example, the number of records or files that contain a certain keyword) would typically require downloading and decrypting the entire database content. Homomorphic encryption is a special kind of encryption that allows operating on ciphertexts without decrypting them; in fact, without even knowing the decryption key. For example, given ciphertexts C=EncK(M) and C'=EncK(M'), an additively homomorphic encryption scheme would allow to combine C and C' to obtain EncK(M+M'). Such encryption schemes are immensely useful in the design of complex cryptographic protocols. For example, an electronic voting scheme may collect encrypted votes Ci=EncK(Mi) where each vote Mi is either 0 or 1, and then tally them to obtain the encryption of the outcome C=EncK(M1+..+Mn). This would be decrypted by an appropriate authority that has the decryption key and ability to announce the result, but the entire collection and tallying process would operate on encrypted data without the use of the secret key. (Of course, this is an oversimplified protocol, as many other issues must be addressed in a real election scheme, but it well illustrates the potential usefulness of homomorphic encryption.) To date, all known homomorphic encryption schemes supported essentially only one basic operation, for example, addition. But the potential of fully homomorphic encryption (that is, homomorphic encryption supporting arbitrarily complex computations on ciphertexts) is clear. Think of encrypting your queries before you send them to your favorite search engine, and receive the encryption of the result without the search engine even knowing what the query was. Imagine running your most computationally intensive programs on your large datasets on a cluster of remote computers, as in a cloud computing environment, while keeping both your programs, data, and results encrypted and confidential. The idea of fully homomorphic encryption schemes was first proposed by Rivest, Adleman, and Dertouzos the late 1970s, but remained a mirage for three decades, the never-to-be-found Holy Grail of cryptography. At least until 2008, when Craig Gentry announced a new approach to the construction of fully homomorphic cryptosystems. In the following paper, Gentry describes his innovative method for constructing fully homomorphic encryption schemes, the first credible solution to this long-standing major problem in cryptography and theoretical computer science at large. While much work is still to be done before fully homomorphic encryption can be used in practice, Gentry’s work is clearly a landmark achievement. Before Gentry’s discovery many members of the cryptography research community thought fully homomorphic encryption was impossible to achieve. Now, most cryptographers (me among them) are convinced the Holy Grail exists. In fact, there must be several of them, more or less efficient ones, all out there waiting to be discovered. Gentry gives a very accessible and enjoyable description of his general method to achieve fully homomorphic encryption as well as a possible instantiation of his framework recently proposed by van Dijik, Gentry, Halevi, and Vaikuntanathan. He has taken great care to explain his technically complex results, some of which have their roots in lattice-based cryptography, using a metaphorical tale of a jeweler and her quest to keep her precious materials safe, while at the same time allowing her employees to work on them. 
Gentry’s homomorphic encryption work is truly worth a read.", "title": "" }, { "docid": "b214270aacf9c9672af06e58ff26aa5a", "text": "Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision. We combine ideas from time-series modeling and metric learning, and study siamese recurrent networks (SRNs) that minimize a classification loss to learn a good similarity measure between time series. Specifically, our approach learns a vectorial representation for each time series in such a way that similar time series are modeled by similar representations, and dissimilar time series by dissimilar representations. Because they are similarity prediction models, SRNs are particularly well-suited to challenging scenarios such as signature recognition, in which each person is a separate class and very few examples per class are available. We demonstrate the potential merits of SRNs in within-domain and out-of-domain classification experiments and in one-shot learning experiments on tasks such as signature, voice, and sign language recognition.", "title": "" }, { "docid": "571f07c7c8ba724d3e266788e5dac622", "text": "The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques. In this paper, after describing the demands and challenges faced by the memory system, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we survey three key solution directions: 1) enabling new DRAM architectures, functions, interfaces, and better integration of the DRAM and the rest of the system, 2) designing a memory system that employs emerging memory technologies and takes advantage of multiple different technologies, 3) providing predictable performance and QoS to applications sharing the memory system. We also briefly describe our ongoing related work in combating scaling challenges of NAND flash memory.", "title": "" }, { "docid": "55ff1f320953b9c9405541c8afd7841a", "text": "Uber used a disruptive business model driven by digital technology to trigger a ride-sharing revolution. The institutional sources of the company’s platform ecosystem architecture were analyzed to explain this revolutionary change. Both an empirical analysis of a co-existing development trajectory with taxis and institutional enablers that helped to create Uber’s platform ecosystem were analyzed. The analysis identified a correspondence with the “two-faced” nature of ICT that nurtures uncaptured GDP. This two-faced nature of ICT can be attributed to a virtuous cycle of decline in prices and an increase in the number of trips. We show that this cycle can be attributed to a self-propagating function that plays a vital role in the spinoff from traditional co-evolution to new co-evolution. Furthermore, we use the three mega-trends of ICT advancement, paradigm change and a shift in people’s preferences to explain the secret of Uber’s system success. 
All these noteworthy elements seem essential to a well-functioning platform ecosystem architecture, not only in transportation but also for other business institutions.", "title": "" }, { "docid": "7850280ba2c29dc328b9594f4def05a6", "text": "Electric traction motors in automotive applications work in operational conditions characterized by variable load, rotational speed and other external conditions: this complicates the task of diagnosing bearing defects. The objective of the present work is the development of a diagnostic system for detecting the onset of degradation, isolating the degrading bearing, and classifying the type of defect. The developed diagnostic system is based on a hierarchical structure of K-Nearest Neighbours classifiers. The selection of the features from the measured vibrational signals to be used as input to the bearing diagnostic system is done by a wrapper approach based on a Multi-Objective (MO) optimization that integrates a Binary Differential Evolution (BDE) algorithm with the K-Nearest Neighbour (KNN) classifiers. The developed approach is applied to an experimental dataset. The satisfactory diagnostic performances obtained show the capability of the method, independently of the bearings' operational conditions.", "title": "" }, { "docid": "ed4ca741a9e5590b90adc5b514269e09", "text": "Glass provides many opportunities for advanced packaging. The most obvious advantage is given by the material properties. As an insulator, glass has low electrical loss, particularly at high frequencies. The relatively high stiffness and ability to adjust the coefficient of thermal expansion give advantages to manage warp in glass core substrates and bonded stacks for both through glass vias (TGV) and carrier applications. Glass also gives advantages for developing cost-effective solutions. Glass forming processes allow the potential to form both in panel format as well as at thicknesses as low as 100 µm, giving opportunities to optimize or eliminate current manufacturing methods. As the industry adopts glass solutions, significant advancements have been made in downstream processes such as glass handling and via/surface metallization. Of particular interest is the ability to leverage tool sets and processes for panel fabrication to enable cost structures desired by the industry. By utilizing the stiffness and adjustable CTE of glass substrates, as well as continuously reducing via size that can be made in a panel format, opportunities to manufacture glass TGV substrates in a panel format increase. We will provide an update on advancements in these areas as well as handling techniques to achieve desired process flows. We will also provide the latest demonstrations of electrical, thermal and mechanical reliability.", "title": "" }, { "docid": "ae4cebb3b37c1d168a827249c314af6f", "text": "A broadcast news stream consists of a number of stories and each story consists of several sentences. We capture this structure using a hierarchical model based on a word-level Recurrent Neural Network (RNN) sentence modeling layer and a sentence-level bidirectional Long Short-Term Memory (LSTM) topic modeling layer. First, the word-level RNN layer extracts a vector embedding the sentence information from the given transcribed lexical tokens of each sentence. These sentence embedding vectors are fed into a bidirectional LSTM that models the sentence and topic transitions. 
A topic posterior for each sentence is estimated discriminatively and a Hidden Markov model (HMM) follows to decode the story sequence and identify story boundaries. Experiments on the topic detection and tracking (TDT2) task indicate that the hierarchical RNN topic modeling achieves the best story segmentation performance with a higher F1-measure compared to conventional state-of-the-art methods. We also compare variations of our model to infer the optimal structure for the story segmentation task.", "title": "" }, { "docid": "a79c9ee27a13b35c1d6710cf9a1ee9cf", "text": "We present a new end-to-end network architecture for facial expression recognition with an attention model. It focuses attention in the human face and uses a Gaussian space representation for expression recognition. We devise this architecture based on two fundamental complementary components: (1) facial image correction and attention and (2) facial expression representation and classification. The first component uses an encoder-decoder style network and a convolutional feature extractor that are pixel-wise multiplied to obtain a feature attention map. The second component is responsible for obtaining an embedded representation and classification of the facial expression. We propose a loss function that creates a Gaussian structure on the representation space. To demonstrate the proposed method, we create two larger and more comprehensive synthetic datasets using the traditional BU3DFE and CK+ facial datasets. We compared results with the PreActResNet18 baseline. Our experiments on these datasets have shown the superiority of our approach in recognizing facial expressions.", "title": "" }, { "docid": "646a1a07019d0f2965051baebcfe62c5", "text": "We present a computing model based on the DNA strand displacement technique, which performs Bayesian inference. The model will take single-stranded DNA as input data, that represents the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease affecting a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose ratio represents the application of Bayes’ law: the conditional probability of the disease given the signal. The models presented in this paper can have the potential to enable the application of probabilistic reasoning in genetic diagnosis in vitro.", "title": "" }, { "docid": "c4e43160e9c3d4358d03cc32170e6c60", "text": "A cavity-backed dual slant polarized and low mutual coupling antenna array panel with frequency band from 4.9 to 6 GHz is analyzed and realized for the MIMO antenna 5G applications. The beamforming capability of this array is also explored. The printed cross dipoles fed with balun and enclosed in a cavity are used as radiating elements. The two cross dipoles are placed at an angle of 45° and 135° giving slant polarizations. A $4 \times 4$ subarray of dimension $2.8\lambda \times 2.8\lambda \times 0.26\lambda$ where $\lambda$ is free space wavelength at 6 GHz is designed, fabricated, and experimentally verified.
It shows good impedance matching, port isolation, envelope correlation coefficient, and radiation characteristics which are desired for MIMO applications. Beamforming capability in the digital domain is verified using the Keysight SystemVue simulation tool for both $4 \times 4$ and $16 \times 16$ panel arrays which employ measured 3-D embedded element radiation pattern data of the fabricated $4 \times 4$ subarray. Four simultaneous beams using digital beamforming approach are also presented for the $16 \times 16$ array for multiuser environment base station antenna applications.", "title": "" }, { "docid": "29eaf6ebd4fc26fac5bd9d29d71d4dc4", "text": "The continuing growth of published scholarly content on the web ensures the availability of the most recent scientific findings to researchers. Scholarly documents, such as research articles, are easily accessed by using academic search engines that are built on large repositories of scholarly documents. Scientific information extraction from documents into a structured knowledge graph representation facilitates automated machine understanding of a document's content. Traditional information extraction approaches, that either require training samples or a preexisting knowledge base to assist in the extraction, can be challenging when applied to large repositories of digital documents. Labeled training examples for such large scale are difficult to obtain for such datasets. Also, most available knowledge bases are built from web data and do not have sufficient coverage to include concepts found in scientific articles. In this paper we aim to construct a knowledge graph from scholarly documents while addressing both these issues. We propose a fully automatic, unsupervised system for scientific information extraction that does not build on an existing knowledge base and avoids manually-tagged training data. We describe and evaluate a constructed taxonomy that contains over 15k entities resulting from applying our approach to 10k documents.", "title": "" }, { "docid": "4a227bddcaed44777eb7a29dcf940c6c", "text": "Deep neural networks have achieved great success on a variety of machine learning tasks. There are many fundamental and open questions yet to be answered, however. We introduce the Extended Data Jacobian Matrix (EDJM) as an architecture-independent tool to analyze neural networks at the manifold of interest. The spectrum of the EDJM is found to be highly correlated with the complexity of the learned functions. After studying the effect of dropout, ensembles, and model distillation using EDJM, we propose a novel spectral regularization method, which improves network performance.", "title": "" }, { "docid": "3e9a214856235ef36a4dd2e9684543b7", "text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships.
The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ac2d144c5c06fcfb2d0530b115f613dc", "text": "In medical imaging, Computer Aided Diagnosis (CAD) is a rapidly growing dynamic area of research. In recent years, significant attempts are made for the enhancement of computer aided diagnosis applications because errors in medical diagnostic systems can result in seriously misleading medical treatments. Machine learning is important in Computer Aided Diagnosis. After using an easy equation, objects such as organs may not be indicated accurately. So, pattern recognition fundamentally involves learning from examples. In the field of bio-medical, pattern recognition and machine learning promise the improved accuracy of perception and diagnosis of disease. They also promote the objectivity of decision-making process. For the analysis of high-dimensional and multimodal bio-medical data, machine learning offers a worthy approach for making classy and automatic algorithms. This survey paper provides the comparative analysis of different machine learning algorithms for diagnosis of different diseases such as heart disease, diabetes disease, liver disease, dengue disease and hepatitis disease. It brings attention towards the suite of machine learning algorithms and tools that are used for the analysis of diseases and decision-making process accordingly.", "title": "" }, { "docid": "c3d25395aff2ec6039b21bd2415bcf1f", "text": "A growing trend for information technology is to not just react to changes, but anticipate them as much as possible. This paradigm made modern solutions, such as recommendation systems, a ubiquitous presence in today’s digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with objectives and constraints of the typical applications and scenarios. 
Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.", "title": "" }, { "docid": "33e1dad6c4f163c0d69bd3f58ecf9058", "text": "Resistive random access memory (RRAM) has gained significant attentions because of its excellent characteristics which are suitable for next-generation non-volatile memory applications. It is also very attractive to build neuromorphic computing chip based on RRAM cells due to non-volatile and analog properties. Neuromorphic computing hardware technologies using analog weight storage allow the scaling-up of the system size to complete cognitive tasks such as face classification much faster while consuming much lower energy. In this paper, RRAM technology development from material selection to device structure, from small array to full chip will be discussed in detail. Neuromorphic computing using RRAM devices is demonstrated, and speed & energy consumption are compared with Xeon Phi processor.", "title": "" }, { "docid": "44491cab59a3f26d559edce907c50fd3", "text": "Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.", "title": "" }, { "docid": "e4da3b7fbbce2345d7772b0674a318d5", "text": "5", "title": "" } ]
scidocsrr
5a82d30b5b3db8ee29e44ca3b06f2aa1
Classification of design parameters for E-commerce websites: A novel fuzzy Kano approach
[ { "docid": "1c0efa706f999ee0129d21acbd0ef5ab", "text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complex dependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success.", "title": "" } ]
[ { "docid": "8eb2a660107b304caf574bdf7fad3f23", "text": "To enhance torque density by harmonic current injection, optimal slot/pole combinations for five-phase permanent magnet synchronous motors (PMSM) with fractional-slot concentrated windings (FSCW) are chosen. The synchronous and the third harmonic winding factors are calculated for a series of slot/pole combinations. Two five-phase PMSM, with general FSCW (GFSCW) and modular stator and FSCW (MFSCW), are analyzed and compared in detail, including the stator structures, star of slots diagrams, and MMF harmonic analysis based on the winding function theory. The analytical results are verified by finite element method, the torque characteristics and phase back-EMF are also taken into considerations. Results show that the MFSCW PMSM can produce higher average torque, while characterized by more MMF harmonic contents and larger ripple torque.", "title": "" }, { "docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c", "text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.", "title": "" }, { "docid": "23305a36194ad3c9b6b3f667c79bd273", "text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. 
The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "16a1f15e8e414b59a230fb4a28c53cc7", "text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.", "title": "" }, { "docid": "eaf1c419853052202cb90246e48a3697", "text": "The objective of this document is to promote the use of dynamic daylight performance measures for sustainable building design. The paper initially explores the shortcomings of conventional, static daylight performance metrics which concentrate on individual sky conditions, such as the common daylight factor. It then provides a review of previously suggested dynamic daylight performance metrics, discussing the capability of these metrics to lead to superior daylighting designs and their accessibility to nonsimulation experts. Several example offices are examined to demonstrate the benefit of basing design decisions on dynamic performance metrics as opposed to the daylight factor. Keywords—–daylighting, dynamic, metrics, sustainable buildings", "title": "" }, { "docid": "a20ba0bb564711edc201b0e021e0dee9", "text": "We approach the task of human silhouette extraction from color and thermal image sequences using automatic image registration. Image registration between color and thermal images is a challenging problem due to the difficulties associated with finding correspondence. However, the moving people in a static scene provide cues to address this problem. In this paper, we propose a hierarchical scheme to automatically find the correspondence between the preliminary human silhouettes extracted from synchronous color and thermal image sequences for image registration. Next, we discuss strategies for probabilistically combining cues from registered color and thermal images for improved human silhouette detection. It is shown that the proposed approach achieves good results for image registration and human silhouette extraction. Experimental results also show a comparison of various sensor fusion strategies and demonstrate the improvement in performance over nonfused cases for human silhouette extraction. 
2006 Published by Elsevier Ltd on behalf of Pattern Recognition Society.", "title": "" }, { "docid": "04a15b226d2466ea03306e3f413b4bd0", "text": "More and more people express their opinions on social media such as Facebook and Twitter. Predictive analysis on social media time-series allows the stake-holders to leverage this immediate, accessible and vast reachable communication channel to react and proact against the public opinion. In particular, understanding and predicting the sentiment change of the public opinions will allow business and government agencies to react against negative sentiment and design strategies such as dispelling rumors and post balanced messages to revert the public opinion. In this paper, we present a strategy of building statistical models from the social media dynamics to predict collective sentiment dynamics. We model the collective sentiment change without delving into micro analysis of individual tweets or users and their corresponding low level network structures. Experiments on large-scale Twitter data show that the model can achieve above 85% accuracy on directional sentiment prediction.", "title": "" }, { "docid": "bd89993bebdbf80b516626881d459333", "text": "Creating a mobile application often requires the developers to create one for Android och one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is gruesome since it is time consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem but these hybrids have not been able to provide a native feeling of the resulting applications. This thesis has evaluated the newly released framework React Native that can create both iOS and Android applications by compiling the code written in React. The resulting applications can share code and consists of the UI components which are unique for each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native as some user could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the Android application but the differences were not protruding. The overall experience is that React Native a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create an application in very short time and be compiled to both Android and iOS. First of all I would like to express my deepest gratitude for Valtech who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants Alexander Lindholm and Tomas Tunström which made it possible for me to bounce off ideas and in the end having a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet who have supported each other and me during this period of time and made it fun even when it was as most tiresome. 
Furthermore I would like to thank my examiner Erik Berglund at Linköpings university who has guided me these last months and provided with insightful comments regarding the paper. Ultimately I would like to thank my family who have always been there to support me and especially my little brother who is my main motivation in life.", "title": "" }, { "docid": "ee9e24f38d7674e601ab13b73f3d37db", "text": "This paper presents the design of an application specific hardware for accelerating High Frequency Trading applications. It is optimized to achieve the lowest possible latency for interpreting market data feeds and hence enable minimal round-trip times for executing electronic stock trades. The implementation described in this work enables hardware decoding of Ethernet, IP and UDP as well as of the FAST protocol which is a common protocol to transmit market feeds. For this purpose, we developed a microcode engine with a corresponding instruction set as well as a compiler which enables the flexibility to support a wide range of applied trading protocols. The complete system has been implemented in RTL code and evaluated on an FPGA. Our approach shows a 4x latency reduction in comparison to the conventional Software based approach.", "title": "" }, { "docid": "5e503aaee94e2dc58f9311959d5a142e", "text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. INTRODUCTION: This paper outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodograms. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described. Let $X(j)$, $j = 0, \ldots, N-1$ be a sample from a stationary, second-order stochastic sequence. Assume for simplicity that $E(X) = 0$. Let $X(j)$ have spectral density $P(f)$, $|f| \le 1/2$. We take segments, possibly overlapping, of length $L$ with the starting points of these segments $D$ units apart. Let $X_1(j)$, $j = 0, \ldots, L-1$ be the first such segment. Then $X_1(j) = X(j)$, and finally $X_K(j) = X(j + (K-1)D)$, $j = 0, \ldots, L-1$. We suppose we have $K$ such segments, $X_1(j), \ldots, X_K(j)$, and that they cover the entire record, i.e., that $(K-1)D + L = N$. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length $L$ we calculate a modified periodogram. That is, we select a data window $W(j)$, $j = 0, \ldots, L-1$, and form the sequences $X_1(j)W(j), \ldots, X_K(j)W(j)$. We then take the finite Fourier transforms $A_1(n), \ldots, A_K(n)$ of these sequences. Here $A_k(n) = \frac{1}{L} \sum_{j=0}^{L-1} X_k(j) W(j) e^{-2\pi i j n / L}$ and $i = \sqrt{-1}$. Finally, we obtain the $K$ modified periodograms $I_k(f_n) = \frac{L}{U} |A_k(n)|^2$, $k = 1, 2, \ldots, K$, where $f_n = n/L$, $n = 0, \ldots, L/2$, and $U = \frac{1}{L} \sum_{j=0}^{L-1} W^2(j)$.
The spectral estimate is the average of these periodograms.", "title": "" }, { "docid": "8e7088af6940cf3c2baa9f6261b402be", "text": "Empathy is an integral part of human social life, as people care about and for others who experience adversity. However, a specific “pathogenic” form of empathy, marked by automatic contagion of negative emotions, can lead to stress and burnout. This is particularly detrimental for individuals in caregiving professions who experience empathic states more frequently, because it can result in illness and high costs for health systems. Automatically recognizing pathogenic empathy from text is potentially valuable to identify at-risk individuals and monitor burnout risk in caregiving populations. We build a model to predict this type of empathy from social media language on a data set we collected of users’ Facebook posts and their answers to a new questionnaire measuring empathy. We obtain promising results in identifying individuals’ empathetic states from their social media (Pearson r = 0.252,", "title": "" }, { "docid": "dcbec6eea7b3157285298f303eb78840", "text": "Osteochondral tissue engineering has shown an increasing development to provide suitable strategies for the regeneration of damaged cartilage and underlying subchondral bone tissue. For reasons of the limitation in the capacity of articular cartilage to self-repair, it is essential to develop approaches based on suitable scaffolds made of appropriate engineered biomaterials. The combination of biodegradable polymers and bioactive ceramics in a variety of composite structures is promising in this area, whereby the fabrication methods, associated cells and signalling factors determine the success of the strategies. The objective of this review is to present and discuss approaches being proposed in osteochondral tissue engineering, which are focused on the application of various materials forming bilayered composite scaffolds, including polymers and ceramics, discussing the variety of scaffold designs and fabrication methods being developed. Additionally, cell sources and biological protein incorporation methods are discussed, addressing their interaction with scaffolds and highlighting the potential for creating a new generation of bilayered composite scaffolds that can mimic the native interfacial tissue properties, and are able to adapt to the biological environment.", "title": "" }, { "docid": "5123d52a50b75e37e90ed7224d531a18", "text": "Tarlov or perineural cysts are nerve root cysts found most commonly at the sacral spine level arising between covering layers of the perineurium and the endoneurium near the dorsal root ganglion. The cysts are relatively rare and most of them are asymptomatic. Some Tarlov cysts can exert pressure on nerve elements resulting in pain, radiculopathy and even multiple radiculopathy of cauda equina. There is no consensus on the appropriate therapeutic options of Tarlov cysts. The authors present a case of two sacral cysts diagnosed with magnetic resonance imaging. The initial symptoms were low back pain and sciatica and progressed to cauda equina syndrome. Surgical treatment was performed by sacral laminectomy and wide cyst fenestration. The neurological deficits were recovered and had not recurred after a follow-up period of nine months. The literature was reviewed and discussed.
This is the first reported case in Thailand.", "title": "" }, { "docid": "eebca83626e8568e8b92019541466873", "text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.", "title": "" }, { "docid": "ad78f226f21bd020e625659ad3ddbf74", "text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. 
Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.", "title": "" }, { "docid": "b57b392e89b92aecb03235eeaaf248c8", "text": "Recent advances in semiconductor performance made possible by organic π-electron molecules, carbon-based nanomaterials, and metal oxides have been a central scientific and technological research focus over the past decade in the quest for flexible and transparent electronic products. However, advances in semiconductor materials require corresponding advances in compatible gate dielectric materials, which must exhibit excellent electrical properties such as large capacitance, high breakdown strength, low leakage current density, and mechanical flexibility on arbitrary substrates. Historically, conventional silicon dioxide (SiO2) has dominated electronics as the preferred gate dielectric material in complementary metal oxide semiconductor (CMOS) integrated transistor circuitry. However, it does not satisfy many of the performance requirements for the aforementioned semiconductors due to its relatively low dielectric constant and intransigent processability. High-k inorganics such as hafnium dioxide (HfO2) or zirconium dioxide (ZrO2) offer some increases in performance, but scientists have great difficulty depositing these materials as smooth films at temperatures compatible with flexible plastic substrates. While various organic polymers are accessible via chemical synthesis and readily form films from solution, they typically exhibit low capacitances, and the corresponding transistors operate at unacceptably high voltages. More recently, researchers have combined the favorable properties of high-k metal oxides and π-electron organics to form processable, structurally well-defined, and robust self-assembled multilayer nanodielectrics, which enable high-performance transistors with a wide variety of unconventional semiconductors. In this Account, we review recent advances in organic-inorganic hybrid gate dielectrics, fabricated by multilayer self-assembly, and their remarkable synergy with unconventional semiconductors. We first discuss the principals and functional importance of gate dielectric materials in thin-film transistor (TFT) operation. Next, we describe the design, fabrication, properties, and applications of solution-deposited multilayer organic-inorganic hybrid gate dielectrics, using self-assembly techniques, which provide bonding between the organic and inorganic layers. Finally, we discuss approaches for preparing analogous hybrid multilayers by vapor-phase growth and discuss the properties of these materials.", "title": "" }, { "docid": "20fd36e287a631c82aa8527e6a36931f", "text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. 
We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.", "title": "" }, { "docid": "5d934dd45e812336ad12cee90d1e8cdf", "text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
461c10b3a42492bd6dc8d0f4113abb32
Design of a slot antenna for future 5G wireless communication systems
[ { "docid": "b811c82ff944715edc2b7dec382cb529", "text": "The mobile industry has experienced dramatic growth, evolving from analog to digital 2G (GSM), then to high data rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antennas are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.", "title": "" } ]
[ { "docid": "24a23aff0026141d1b6970e8216347f8", "text": "Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across various functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow compositional features of DSPS. These are coupled with stream workloads sourced from real IoT observations from smart cities. We validate the IoT benchmark for the popular Apache Storm DSPS, and present empirical results.", "title": "" }, { "docid": "d99747fb44a839a2ab8765c1176e4c77", "text": "The aim of this paper is to explore text topic influence in authorship attribution. Specifically, we test the widely accepted belief that stylometric variables commonly used in authorship attribution are topic-neutral and can be used in multi-topic corpora. In order to investigate this hypothesis, we created a special corpus, which was controlled for topic and author simultaneously. The corpus consists of 200 Modern Greek newswire articles written by two authors in two different topics. Many commonly used stylometric variables were calculated and for each one we performed a two-way ANOVA test, in order to estimate the main effects of author, topic and the interaction between them. The results showed that most of the variables exhibit considerable correlation with the text topic and their exploitation in authorship analysis should be done with caution.", "title": "" }, { "docid": "a6e0bbc761830bc74d58793a134fa75b", "text": "With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. 
The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.", "title": "" }, { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" }, { "docid": "767de215cc843a255aa31ee3b45cc373", "text": "Breast cancer is the most frequently diagnosed cancer and leading cause of cancer-related death among females worldwide. In this article, we investigate the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer. To this end, we study various approaches for transfer learning and apply them to the data set from the 2018 grand challenge on breast cancer histology images (BACH).", "title": "" }, { "docid": "ffb65e7e1964b9741109c335f37ff607", "text": "To build a redundant medium-voltage converter, the semiconductors must be able to turn OFF different short circuits. The most challenging one is a hard turn OFF of a diode which is called short-circuit type IV. Without any protection measures this short circuit destroys the high-voltage diode. Therefore, a novel three-level converter with an increased short-circuit inductance is used. In this paper several short-circuit measurements on a 6.5 kV diode are presented which explain the effect of the protection measures. 
Moreover, the limits of the protection scheme are presented.", "title": "" }, { "docid": "930b64774bb10983540c6ccf092a36d9", "text": "We consider the solution of discounted optimal stopping problems using linear function approximation methods. A Q-learning algorithm for such problems, proposed by Tsitsiklis and Van Roy, is based on the method of temporal differences and stochastic approximation. We propose alternative algorithms, which are based on projected value iteration ideas and least squares. We prove the convergence of some of these algorithms and discuss their properties.", "title": "" }, { "docid": "21ca1c1fce82a764e9dc7b31e11cb0fa", "text": "We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate “fewshot” models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters, that is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the “body”, and from the “body” to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics, that predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting, that significantly outperform common heuristics, such as data resampling or reweighting.", "title": "" }, { "docid": "9827fa3952b7ba4e5e777793cc241148", "text": "We address the problem of segmenting a sequence of images of natural scenes into disjoint regions that are characterized by constant spatio-temporal statistics. We model the spatio-temporal dynamics in each region by Gauss-Markov models, and infer the model parameters as well as the boundary of the regions in a variational optimization framework. Numerical results demonstrate that – in contrast to purely texture-based segmentation schemes – our method is effective in segmenting regions that differ in their dynamics even when spatial statistics are identical.", "title": "" }, { "docid": "f8aeaf04486bdbc7254846d95e3cab24", "text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. 
A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and an efficient path to a waypoint delivered to the haptic feedback system. The haptic feedback system consisting of four micro-vibration motors is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30Hz in average on a laptop, and helps the visually impaired extends the range of their activities and improve the mobility performance in a cluttered environment. The experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.", "title": "" }, { "docid": "5c38ad54e43b71ea5588418620bcf086", "text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.", "title": "" }, { "docid": "e9a9938b77b2f739a83b987455bc2ef7", "text": "Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of the context characters. Since GRNN is relative deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on the benchmark datasets show that our model outperforms the previous neural network models as well as the state-of-the-art methods.", "title": "" }, { "docid": "b5ca7ce46418c992a5fbe1fe01676023", "text": "Labeling topics learned by topic models is a challenging problem. Previous studies have used words, phrases and images to label topics. In this paper, we propose to use text summaries for topic labeling. Several sentences are extracted from the most related documents to form the summary for each topic. In order to obtain summaries with both high relevance, coverage and discrimination for all the topics, we propose an algorithm based on submodular optimization. Both automatic and manual analysis have been conducted on two real document collections, and we find 1) the summaries extracted by our proposed algorithm are superior over the summaries extracted by existing popular summarization methods; 2) the use of summaries as labels has obvious advantages over the use of words and phrases.", "title": "" }, { "docid": "2088c56bb59068a33de09edc6831e74b", "text": "We present a novel end-to-end neural model to extract entities and relations between them. 
Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 3.5% and 4.8% relative error reductions in F1-score on ACE2004 and ACE2005, respectively. We also show a 2.5% relative error reduction in F1-score over the state-of-the-art convolutional neural network based model on nominal relation classification (SemEval-2010 Task 8).", "title": "" }, { "docid": "4445f128f31d6f42750049002cb86a29", "text": "Convolutional neural networks are a popular choice for current object detection and classification systems. Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.", "title": "" }, { "docid": "28a11e458f0c922e3354065c7f1feb8e", "text": "Diabetes mellitus (DM) is the most common of the endocrine disorders and represents a global health problem. DM is characterized by chronic hyperglycaemia due to relative or absolute lack of insulin or the actions of insulin. Insulin is the main treatment for patients with type 1 DM and it is also important in type 2 DM when blood glucose levels cannot be controlled by diet, weight loss, exercise and oral medications alone. Prior to the availability of insulin, dietary measures, including the traditional medicines derived from plants, were the major form of treatment. A multitude of plants have been used for the treatment of diabetes throughout the world. One such plant is Momordica charantia (Linn Family: Cucurbaceae), whose fruit is known as Karela or bittergourd. For a long time, several workers have studied the effects of this plant in DM. Treatment with M. charantia fruit juice reduced blood glucose levels, improved body weight and glucose tolerance. M. charantia fruit juice can also inhibit glucose uptake by the gut and stimulate glucose uptake by skeletal muscle cells. Moreover, the juice of this plant preserves islet β cells and β cell functions, normalises the systolic blood pressure, and modulates xenobiotic metabolism and oxidative stress. M. charantia also has anti-carcinogenic properties. In conclusion, M. charantia has tremendous beneficial values in the treatment of DM.", "title": "" }, { "docid": "152d2dc6a96621ee6beb29ce472c6bb5", "text": "Value functions are a core component of reinforcement learning systems. The main idea is to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g.
We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.", "title": "" }, { "docid": "c30daf9d6dac96d416a399b9b2ac5b17", "text": "In the past 10 years, several researchers have studied the video game development process and proposed approaches to improve the way games are developed. These approaches usually adopt agile methodologies because of claims that traditional practices and the waterfall process are gone. However, are the \"old days\" really gone in the game industry?\n In this paper, we present a survey of software engineering processes in the video game industry from postmortem project analyses. We analyzed 20 postmortems from the Gamasutra Portal. We extracted their processes and modelled them using the Business Process Model and Notation (BPMN).\n This work presents three main contributions. First, a postmortem analysis methodology to identify and extract project processes. Second, the study's main result: the \"old days\" are gone, but not completely. Iterative practices are increasing and are applied in at least 65% of projects, of which 45% explicitly adopted Agile practices. However, the waterfall process is still applied in at least 30% of projects. Finally, we discuss some implications, directions and opportunities for the video game development community.", "title": "" }, { "docid": "8b362e80f15b211a227f8a930f5d1ddb", "text": "Collagen hydrolysate is a well-known dietary supplement for the treatment of skin aging; however, its mode of action remains unknown. Previous studies have shown that the oral ingestion of collagen hydrolysate leads to elevated levels of collagen-derived peptides in the blood, but whether these peptides reach the skin remains unclear. Here, we analyzed the plasma concentration of collagen-derived peptides after ingestion of high-tripeptide-containing collagen hydrolysate in humans. We identified 17 types of collagen-derived peptides transiently, with a particular enrichment in Gly-Pro-Hyp. This was also observed using an in vivo mouse model in the plasma and skin, albeit with a higher enrichment of Pro-Hyp in the skin. Interestingly, this Pro-Hyp enrichment in the skin was derived from Gly-Pro-Hyp hydrolysis, as the administration of pure Gly-Pro-Hyp peptide led to similar results. Therefore, we propose that functional peptides can be transferred to the skin by dietary supplements of collagen.", "title": "" }, { "docid": "3f5b44f905ac779b0322bd599b1c77e9", "text": "This paper aims at creating a broad picture of security awareness and the ways it has been approached, as well as the concerns, problems or gaps that may inhibit its successful implementation, towards understanding the reasons why security awareness practice remains problematic. Open coding analysis was performed on numerous publications (articles, surveys, standards, reports and books). A classification scheme of six categories of concern has emerged from the content analysis (e.g. terminology ambiguity) and the chosen publications were classified based on it. 
The paper identifies ambiguous aspects of current security awareness approaches and the proposed classification provides a guide to identify the range of options available to researchers and practitioners when they design their research and practice on information security awareness.", "title": "" } ]
scidocsrr
5b249be5ecd6332f3b560cd46fbf4d90
Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks
[ { "docid": "b205346e003c429cd2b32dc759921643", "text": "Sentence correction has been an important emerging issue in computer-assisted language learning. However, existing techniques based on grammar rules or statistical machine translation are still not robust enough to tackle the common errors in sentences produced by second language learners. In this paper, a relative position language model and a parse template language model are proposed to complement traditional language modeling techniques in addressing this problem. A corpus of erroneous English-Chinese language transfer sentences along with their corrected counterparts is created and manually judged by human annotators. Experimental results show that compared to a state-of-the-art phrase-based statistical machine translation system, the error correction performance of the proposed approach achieves a significant improvement using human evaluation.", "title": "" }, { "docid": "aa80366addac8af9cc5285f98663b9b6", "text": "Automatic detection of sentence errors is an important NLP task and is valuable for assisting foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of error. Word n-gram features from the Google Chinese Web 5-gram corpus and the ClueWeb09 corpus, and POS features from the Chinese POS-tagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features is useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy on the experimental datasets. Assisting Non-Native Chinese Learners in Detecting Word Ordering Errors in Chinese Sentences. Automatic detection of sentence errors is an important research topic in natural language processing and is valuable for assisting foreign language learners. In this paper, we study the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of error. The features used in the classifiers include word n-grams from the Google Chinese Web 5-gram corpus and the ClueWeb09 corpus, as well as Chinese part-of-speech features. The experimental results show that integrating syntactic features, web corpus features and perturbation features is helpful for detecting Chinese word ordering errors. On the datasets used in the experiments, the classifier combining these features achieves an accuracy of 71.64%.", "title": "" } ]
[ { "docid": "d5e54133fa5166f0e72884bd3501bbfb", "text": "In order to explore the characteristics of the evolution behavior of the time-varying relationships between multivariate time series, this paper proposes an algorithm to transfer this evolution process to a complex network. We take the causality patterns as nodes and the succeeding sequence relations between patterns as edges. We used four time series as sample data. The results of the analysis reveal some statistical evidence that the causalities between time series are in a dynamic process. This implies that stationary long-term causalities are not suitable for some special situations. Some short-term causalities that our model recognized can serve as a reference for the dynamic adjustment of decisions. The results also show that the weighted degree of the nodes obeys a power-law distribution. This implies that a few types of causality patterns play a major role in the process of the transition and that the international crude oil market is statistically significantly not random. The clustering effect appears in the transition process and different clusters have different transition characteristics, which provides probability information for predicting the evolution of the causality. The approach shows potential for analyzing multivariate time series and provides important information for investors and decision makers.", "title": "" }, { "docid": "f6bb2c30fb95a8d120b525875bc2fda6", "text": "We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded ℓ∞ norm less than ε = 0.1), and code for all experiments is available at http://github.com/locuslab/convex_adversarial.", "title": "" }, { "docid": "4ad1aa5086c15be3d5ba9d692d1772a2", "text": "We demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. 
Convolutional neural networks (CNN) learn higher-level image representations. In this work we explore the features extracted from layers of the CNN along with a set of classical features, including GIST and bag-of-words (BoW). We show results of classification using each feature set as well as fusing the features. Finally, we perform feature selection on the collection of features to show the most informative feature set for the task. Results of 0.78-0.95 AUC for various pathologies are shown on a dataset of more than 600 radiographs. This study shows the strength and robustness of the CNN features. We conclude that deep learning with large-scale non-medical image databases may be a good substitute for, or addition to, domain-specific representations, which are yet to be available for general medical image recognition tasks.", "title": "" }, { "docid": "980565c38859db2df10db238d8a4dc61", "text": "Performing High Voltage (HV) tasks with a multi-craft workforce creates a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. It also discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.", "title": "" }, { "docid": "8b9e4490a1e9a70d9bb35a9c87a391d4", "text": "The latest advances in eHealth and mHealth have fostered the rapid creation and expansion of mobile applications for health care. One of these types of applications is the clinical decision support system, which nowadays is being implemented in mobile apps to support health care professionals in their daily clinical decisions. The aim of this paper is twofold. Firstly, to review the current systems available in the literature and in commercial stores. Secondly, to analyze a sample of applications in order to obtain some conclusions and recommendations. Two reviews have been done: a literature review on Scopus, IEEE Xplore, Web of Knowledge and PubMed and a commercial review on Google play and the App Store. Five applications from each review have been selected to develop an in-depth analysis and to obtain more information about the mobile clinical decision support systems. Ninety-two relevant papers and 192 commercial apps were found. Forty-four papers were focused only on mobile clinical decision support systems. One hundred seventy-one apps were available on Google play and 21 on the App Store. The apps are designed for general medicine and 37 different specialties, with some features common to all of them despite the different medical fields they target. The number of mobile clinical decision support applications and their inclusion in clinical practice has risen in recent years. However, developers must be careful with their interfaces and ease of use, which can otherwise impoverish the experience of the users.", "title": "" }, { "docid": "5b545c14a8784383b8d921eb27991749", "text": "In this chapter, neural networks are used to predict future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. 
The Standard and Poor's 500 (S&P 500) index is used in the experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, the effects of the NASDAQ 100 on the prediction of the S&P 500 are studied. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETFs), which attempt to replicate the performance of the S&P 500 by holding the same stocks in the same proportions as the index, and therefore giving the same percentage returns as the S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicate the performance of the S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on the S&P 500.", "title": "" }, { "docid": "e8bf5fbe2ec29e0ea7ef6a368a54147e", "text": "In this paper a combined Ground Penetrating Radar (GPR) and Synthetic Aperture Radar (SAR) technique is introduced, which considers the soil surface refraction and the wave propagation in the ground. By using Fermat's principle and the Sobel operator, the SAR image of the GPR data is optimized, and the soil's permittivity is estimated. The theoretical approach is discussed thoroughly and measurements that were carried out on a test sand box verify the proposed technique.", "title": "" }, { "docid": "1e7b1bbaba8b9f9a1e28db42e18c23bf", "text": "To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening avenues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost.", "title": "" }, { "docid": "4beac4e75474bdda0f0d005e5d235f90", "text": "We present a neural transducer model with visual attention that learns to generate LaTeX markup of a real-world math formula given its image. Applying sequence modeling and transduction techniques that have been very successful across modalities such as natural language, image, handwriting, speech and audio, we construct an image-to-markup model that learns to produce syntactically and semantically correct LaTeX markup code over 150 words long and achieves a BLEU score of 89%, improving upon the previous state-of-the-art for the Im2Latex problem. 
We also demonstrate with heat-map visualization how attention helps in interpreting the model and can pinpoint (localize) symbols on the image accurately despite having been trained without any bounding box data.", "title": "" }, { "docid": "986f55bb12d71e534e1e2fe970f610fb", "text": "Code corpora, as observed in large software systems, are now known to be far more repetitive and predictable than natural language corpora. But why? Does the difference simply arise from the syntactic limitations of programming languages? Or does it arise from the differences in authoring decisions made by the writers of these natural and programming language texts? We conjecture that the differences are not entirely due to syntax, but also from the fact that reading and writing code is un-natural for humans, and requires substantial mental effort; so, people prefer to write code in ways that are familiar to both reader and writer. To support this argument, we present results from two sets of studies: 1) a first set aimed at attenuating the effects of syntax, and 2) a second, aimed at measuring repetitiveness of text written in other settings (e.g. second language, technical/specialized jargon), which are also effortful to write. We find that this repetition in source code is not entirely the result of grammar constraints, and thus some repetition must result from human choice. While the evidence we find of similar repetitive behavior in technical and learner corpora does not conclusively show that such language is used by humans to mitigate difficulty, it is consistent with that theory. This discovery of “non-syntactic” repetitive behaviour is actionable, and can be leveraged for statistically significant improvements on the code suggestion task. We discuss this finding, and other future implications on practice, and for research.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "e2a5f57497e57881092e33c6ab3ec817", "text": "Doc2Sent2Vec is an unsupervised approach to learn low-dimensional feature vector (or embedding) for a document. This embedding captures the semantics of the document and can be fed as input to machine learning algorithms to solve a myriad number of applications in the field of data mining and information retrieval. Some of these applications include document classification, retrieval, and ranking.\n The proposed approach is two-phased. In the first phase, the model learns a vector for each sentence in the document using a standard word-level language model. In the next phase, it learns the document representation from the sentence sequence using a novel sentence-level language model. Intuitively, the first phase captures the word-level coherence to learn sentence embeddings, while the second phase captures the sentence-level coherence to learn document embeddings. 
Compared to the state-of-the-art models that learn document vectors directly from the word sequences, we hypothesize that the proposed decoupled strategy of learning sentence embeddings followed by document embeddings helps the model learn accurate and rich document representations.\n We evaluate the learned document embeddings by considering two classification tasks: scientific article classification and Wikipedia page classification. Our model outperforms the current state-of-the-art models in the scientific article classification task by ~12.07% and the Wikipedia page classification task by ~6.93%, both in terms of F1 score. These results highlight the superior quality of document embeddings learned by the Doc2Sent2Vec approach.", "title": "" }, { "docid": "3405c4808237f8d348db27776d6e9b61", "text": "Pheochromocytomas are catecholamine-releasing tumors that can be found in an extra-adrenal location in 10% of the cases. Almost half of all pheochromocytomas are now discovered incidentally during cross-sectional imaging for unrelated causes. We present a case of a paraganglioma of the organ of Zuckerkandl that was discovered incidentally during a magnetic resonance angiogram performed for intermittent claudication. Subsequent investigation with computed tomography and I-123 metaiodobenzylguanidine scintigraphy as well as an overview of the literature are also presented.", "title": "" }, { "docid": "cd13c8d9b950c35c73aeaadd2cfa1efb", "text": "The significant worldwide increase in observed river runoff has been tentatively attributed to the stomatal \"antitranspirant\" response of plants to rising atmospheric CO(2) [Gedney N, Cox PM, Betts RA, Boucher O, Huntingford C, Stott PA (2006) Nature 439: 835-838]. However, CO(2) also is a plant fertilizer. When allowing for the increase in foliage area that results from increasing atmospheric CO(2) levels in a global vegetation model, we find a decrease in global runoff from 1901 to 1999. This finding highlights the importance of vegetation structure feedback on the water balance of the land surface. Therefore, the elevated atmospheric CO(2) concentration does not explain the estimated increase in global runoff over the last century. In contrast, we find that changes in mean climate, as well as its variability, do contribute to the global runoff increase. Using historic land-use data, we show that land-use change plays an additional important role in controlling regional runoff values, particularly in the tropics. Land-use change has been strongest in tropical regions, and its contribution is substantially larger than that of climate change. On average, land-use change has increased global runoff by 0.08 mm/year(2) and accounts for approximately 50% of the reconstructed global runoff trend over the last century. Therefore, we emphasize the importance of land-cover change in forecasting future freshwater availability and climate.", "title": "" }, { "docid": "21e17ad2d2a441940309b7eacd4dec6e", "text": "With a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. 
Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulting from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index Terms: Data warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial", "title": "" }, { "docid": "48f25218a45d12907dba7b42b2148a40", "text": "Cross-site scripting (XSS) vulnerabilities are among the most common and serious web application vulnerabilities. It is challenging to eliminate XSS vulnerabilities because it is difficult for web applications to sanitize all user input appropriately. We present Noncespaces, a technique that enables web clients to distinguish between trusted and untrusted content to prevent exploitation of XSS vulnerabilities. Using Noncespaces, a web application randomizes the (X)HTML tags and attributes in each document before delivering it to the client. As long as the attacker is unable to guess the random mapping, the client can distinguish between trusted content created by the web application and untrusted content provided by an attacker. To implement Noncespaces with minimal changes to web applications, we leverage a popular web application architecture to automatically apply Noncespaces to static content processed through a popular PHP template engine. We design a policy language for Noncespaces, implement a training mode to assist policy development, and conduct extensive security testing of a generated policy for two large web applications to show the effectiveness of our technique. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8970ace14fef5499de4bf810ab66c7ce", "text": "Glioblastoma multiforme is the most common primary malignant brain tumour, with a median survival of about one year. This poor prognosis is due to therapeutic resistance and tumour recurrence after surgical removal. Precisely how recurrence occurs is unknown. Using a genetically engineered mouse model of glioma, here we identify a subset of endogenous tumour cells that are the source of new tumour cells after the drug temozolomide (TMZ) is administered to transiently arrest tumour growth. A nestin-ΔTK-IRES-GFP (Nes-ΔTK-GFP) transgene that labels quiescent subventricular zone adult neural stem cells also labels a subset of endogenous glioma tumour cells. On arrest of tumour cell proliferation with TMZ, pulse-chase experiments demonstrate a tumour re-growth cell hierarchy originating with the Nes-ΔTK-GFP transgene subpopulation. Ablation of the GFP+ cells with chronic ganciclovir administration significantly arrested tumour growth, and combined TMZ and ganciclovir treatment impeded tumour development. 
Thus, a relatively quiescent subset of endogenous glioma cells, with properties similar to those proposed for cancer stem cells, is responsible for sustaining long-term tumour growth through the production of transient populations of highly proliferative cells.", "title": "" }, { "docid": "9bb86141611c54978033e2ea40f05b15", "text": "In this work we investigate the problem of road scene semantic segmentation using Deconvolutional Networks (DNs). Several constraints limit the practical performance of DNs in this context: firstly, the paucity of existing pixel-wise labelled training data, and secondly, the memory constraints of embedded hardware, which rule out the practical use of state-of-the-art DN architectures such as fully convolutional networks (FCN). To address the first constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset, aggregating data from six existing densely and sparsely labelled datasets for training our models, and two existing, separate datasets for testing their generalisation performance. We show that, while MDRS3 offers a greater volume and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to overcome this, based on (i) the creation of a best-possible source network (S-Net) from the aggregated data, ignoring time and memory constraints; and (ii) the transfer of knowledge from S-Net to the memory-efficient target network (T-Net). We evaluate different techniques for S-Net creation and T-Net transferral, and demonstrate that training a constrained deconvolutional network in this manner can unlock better performance than existing training approaches. Specifically, we show that a target network can be trained to achieve improved accuracy versus an FCN despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scarce or fragmented and where practical constraints exist on the desired model size. We make available our network models and aggregated multi-domain dataset for reproducibility.", "title": "" }, { "docid": "75177326b8408f755100bf86e1f8bd90", "text": "We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodeable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.", "title": "" }, { "docid": "3b9c658245726acdb246e984cae666c5", "text": "In pursuing a refined Learning Styles Inventory (LSI), Kolb has moved away from the original cyclical nature of his model of experiential learning. Kolb’s model has not adapted to current research and has failed to increase understanding of learning. A critical examination of Kolb’s experiential learning theory in terms of epistemology, educational neuroscience, and model analysis reveals the need for an experiential learning theory that addresses these issues. 
This article re-conceptualizes experiential learning by building from cognitive neuroscience, Dynamic Skill Theory, and effective experiential education practices into a self-adjusting fractal-like cycle that we call CoConstructed Developmental Teaching Theory (CDTT). CDTT is a biologically driven model of teaching. It is a cohesive framework of ideas that have been presented before but not linked in a coherent manner to the biology of the learning process. In addition, it orders the steps in a neurobiologically supported sequence. CDTT opens new avenues of research utilizing evidenced-based teaching practices and provides a basis for a new conversation. However, thorough testing remains.", "title": "" } ]
scidocsrr
8c692f26f58833c37a11b73b421b6b85
3D Surface Reconstruction by Pointillism
[ { "docid": "f03f84dd248d06049a177768f0fc8671", "text": "We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this frame-work works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.", "title": "" }, { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "98e557f291de3b305a91e47f59a9ed34", "text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.", "title": "" } ]
[ { "docid": "4716f812737e5ae082e30bab3fde16f9", "text": "Recently, electronic books (e-books) have become prevalent amongst the general population, as well as students, owing to their advantages over traditional books. In South Africa, a number of schools have integrated tablets into the classroom with the promise of replacing traditional books. In order to realise the potential of e-books and their associated devices within an academic context, where reading speed and comprehension are critical for academic performance and personal growth, the effectiveness of reading from a tablet screen should be evaluated. To achieve this objective, a quasi-experimental withinsubjects design was employed in order to compare the reading speed and comprehension performance of 68 students. The results of this study indicate the majority of participants read faster on an iPad, which is in contrast to previous studies that have found reading from tablets to be slower. It was also found that comprehension scores did not differ significantly between the two media. For students, these results provide evidence that tablets and e-books are suitable tools for reading and learning, and therefore, can be used for academic work. For educators, e-books can be introduced without concern that reading performance and comprehension will be hindered.", "title": "" }, { "docid": "44438acfb2ae3a17f91d411fc7f39eec", "text": "Tonal languages, such as Chinese, use systematic variations of pitch to distinguish lexical or grammatical meaning. Thus, tone recognition is essential for tonal languages. Typically, tone recognition for isolated syllables involves three major steps: fundamental frequency (F0) detection, feature extraction, and classification. The work compares different techniques for these three steps and to answer the questions: for Mandarin Chinese syllables, what combination of fundamental frequency detection and feature extraction methods best prepare data for classification, and what is the most effective classification method for tone recognition. Three types of F0 detection methods (autocorrelation, cross-correlation and cepstrum), two feature extraction schemes (sampled F0 and average F0, slope and energy from three subsegments), four normalization methods (slope only, 0--100 scaled, z-score and T1 shift), and two classification methods (Support Vector Machine (SVM) and Multilayer Perceptron (MLP)) were experimentally studied using 700 collected data samples.", "title": "" }, { "docid": "689c2bac45b0933994337bd28ce0515d", "text": "Jealousy is a powerful emotional force in couples' relationships. In just seconds it can turn love into rage and tenderness into acts of control, intimidation, and even suicide or murder. Yet it has been surprisingly neglected in the couples therapy field. In this paper we define jealousy broadly as a hub of contradictory feelings, thoughts, beliefs, actions, and reactions, and consider how it can range from a normative predicament to extreme obsessive manifestations. We ground jealousy in couples' basic relational tasks and utilize the construct of the vulnerability cycle to describe processes of derailment. We offer guidelines on how to contain the couple's escalation, disarm their ineffective strategies and power struggles, identify underlying vulnerabilities and yearnings, and distinguish meanings that belong to the present from those that belong to the past, or to other contexts. 
The goal is to facilitate relational and personal changes that can yield a better fit between the partners' expectations.", "title": "" }, { "docid": "42bdc6f7616cfa2e2d24c6f8df183adc", "text": "In robotic single-port surgery, it is desirable for a manipulator to exhibit the property of variable stiffness. Small-port incisions may require both high flexibility of the manipulator for safety purposes, as well as high structural stiffness for operational precision and high payload capability. This paper presents a new hyperredundant tubular manipulator with a variable neutral-line mechanisms and adjustable stiffness. A unique asymmetric arrangement of the tendons and the links realizes both articulation of the manipulator and continuous stiffness modulation. This asymmetric motion of the manipulator is compensated by a novel actuation mechanism without affecting its structural stiffness. The paper describes the basic mechanics of the variable neutral-line manipulator, and its stiffness characteristics. Simulation and experimental results verify the performance of the proposed mechanism.", "title": "" }, { "docid": "4a3496a835d3948299173b4b2767d049", "text": "We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.", "title": "" }, { "docid": "2a5194f83142bbaef832011d08acd780", "text": "This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). The qualitatively and quantitatively evaluations have demonstrated that our simple algorithm outperforms existing heuristic-based approaches and is even comparable to full Visual Inertial navigation to our surprise. As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. 
We will publicly share our code and data to facilitate further research.", "title": "" }, { "docid": "ff272e6b59a3069372a694f99963929d", "text": "Nowadays, Information Technology (IT) plays an important role in efficiency and effectiveness of the organizational performance. As an IT application, Enterprise Resource Planning (ERP) systems is considered one of the most important IT applications because it enables the organizations to connect and interact with its administrative units in order to manage data and organize internal procedures. Many institutions use ERP systems, most notably Higher Education Institutions (HEIs). However, many projects fail or exceed scheduling and budget constraints; the rate of failure in HEIs sector is higher than in other sectors. With HEIs’ recent movement to implement ERP systems and the lack of research studies examining successful implementation in HEIs, this paper provides a critical literature review with a special focus on Saudi Arabia. Further, it defines Critical Success Factors (CSFs) contributing to the success of ERP implementation in HEIs. This paper is part of a larger research effort aiming to provide guidelines and useful findings that help HEIs to manage the challenges for ERP systems and define CSFs that will help practitioners to implement them in the Saudi context.", "title": "" }, { "docid": "dacb4491a0cf1e05a2972cc1a82a6c62", "text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.", "title": "" }, { "docid": "67c3e39341c5522b309016b2bbb6a64a", "text": "Process discovery, i.e., learning process models from event logs, has attracted the attention of researchers and practitioners. Today, there exists a wide variety of process mining techniques that are able to discover the control-flow of a process based on event data. These techniques are able to identify decision points, but do not analyze data flow to find rules explaining why individual cases take a particular path. Fortunately, recent advances in conformance checking can be used to align an event log with data and a process model with decision points. These alignments can be used to generate a well-defined classification problem per decision point. 
This way data flow and guards can be discovered and added to the process model.", "title": "" }, { "docid": "0a34ed8b01c6c700e7bb8bb15644590f", "text": "Almost all automatic semantic role labeling (SRL) systems rely on a preliminary parsing step that derives a syntactic structure from the sentence being analyzed. This makes the choice of syntactic representation an essential design decision. In this paper, we study the influence of syntactic representation on the performance of SRL systems. Specifically, we compare constituent-based and dependency-based representations for SRL of English in the FrameNet paradigm. Contrary to previous claims, our results demonstrate that the systems based on dependencies perform roughly as well as those based on constituents: For the argument classification task, dependency-based systems perform slightly higher on average, while the opposite holds for the argument identification task. This is remarkable because dependency parsers are still in their infancy while constituent parsing is more mature. Furthermore, the results show that dependency-based semantic role classifiers rely less on lexicalized features, which makes them more robust to domain changes and makes them learn more efficiently with respect to the amount of training data.", "title": "" }, { "docid": "6fdd0c7d239417234cfc4706a82b5a0f", "text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks [1], e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone [2] and Duo Lingo [3]. The approach is grounded in control theory and capitalizes on recent work by [4], [5] that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on [4], [5] in several ways: (1) We develop a novel student model in which the teacher's actions can partially eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted analytically rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through deeper learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.", "title": "" }, { "docid": "750db71b0662772cd9eb7e7d246fb62c", "text": "Microalgal blooms are a natural part of the seasonal cycle of photosynthetic organisms in marine ecosystems. They are key components of the structure and dynamics of the oceans and thus sustain the benefits that humans obtain from these aquatic environments. However, some microalgal blooms can cause harm to humans and other organisms. 
These harmful algal blooms (HABs) have direct impacts on human health and negative influences on human wellbeing, mainly through their consequences to coastal ecosystem services (fisheries, tourism and recreation) and other marine organisms and environments. HABs are natural phenomena, but these events can be favoured by anthropogenic pressures in coastal areas. Global warming and associated changes in the oceans could affect HAB occurrences and toxicity as well, although forecasting the possible trends is still speculative and requires intensive multidisciplinary research. At the beginning of the 21st century, with expanding human populations, particularly in coastal and developing countries, mitigating HABs impacts on human health and wellbeing is becoming a more pressing public health need. The available tools to address this global challenge include maintaining intensive, multidisciplinary and collaborative scientific research, and strengthening the coordination with stakeholders, policymakers and the general public. Here we provide an overview of different aspects of the HABs phenomena, an important element of the intrinsic links between oceans and human health and wellbeing.", "title": "" }, { "docid": "3ad19b3710faeda90db45e2f7cebebe8", "text": "Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.", "title": "" }, { "docid": "6ccdfa4cc3bfbb8bf8f488aaf0c0fc1e", "text": "Strings are ubiquitous in computer systems and hence string processing has attracted extensive research effort from computer scientists in diverse areas. 
One of the most important problems in string processing is to efficiently evaluate the similarity between two strings based on a specified similarity measure. String similarity search is a fundamental problem in information retrieval, database cleaning, biological sequence analysis, and more. While a large number of dissimilarity measures on strings have been proposed, edit distance is the most popular choice in a wide spectrum of applications. Existing indexing techniques for similarity search queries based on edit distance, e.g., approximate selection and join queries, rely mostly on n-gram signatures coupled with inverted list structures. These techniques are tailored for specific query types only, and their performance remains unsatisfactory especially in scenarios with strict memory constraints or frequent data updates. In this paper\n we propose the Bed-tree, a B+-tree based index structure for evaluating all types of similarity queries on edit distance and normalized edit distance. We identify the necessary properties of a mapping from the string space to the integer space for supporting searching and pruning for these queries. Three transformations are proposed that capture different aspects of information inherent in strings, enabling efficient pruning during the search process on the tree. Compared to state-of-the-art methods on string similarity search, the Bed-tree is a complete solution that meets the requirements of all applications, providing high scalability and fast response time.", "title": "" }, { "docid": "d6b6cbfa8c872b9f9066ea7beda2d2e4", "text": "Computer Science (CS) Unplugged activities have been deployed in many informal settings to present computing concepts in an engaging manner. To justify use in the classroom, however, it is critical for activities to have a strong educational component. For the past three years, we have been developing and refining a CS Unplugged curriculum for use in middle school classrooms. In this paper, we describe an assessment that maps questions from a comprehensive project to computational thinking (CT) skills and Bloom's Taxonomy. We present results from two different deployments and discuss limitations and implications of our approach.", "title": "" }, { "docid": "e6a92df6b717a55f86425b0164e9aa3a", "text": "The COmpound Semiconductor Materials On Silicon (COSMOS) program of the U.S. Defense Advanced Research Projects Agency (DARPA) focuses on developing transistor-scale heterogeneous integration processes to intimately combine advanced compound semiconductor (CS) devices with high-density silicon circuits. The technical approaches being explored in this program include high-density micro assembly, monolithic epitaxial growth, and epitaxial layer printing processes. In Phase I of the program, performers successfully demonstrated world-record differential amplifiers through heterogeneous integration of InP HBTs with commercially fabricated CMOS circuits. In the current Phase II, complex wideband, large dynamic range, high-speed digital-to-analog convertors (DACs) are under development based on the above heterogeneous integration approaches. These DAC designs will utilize InP HBTs in the critical high-speed, high-voltage swing circuit blocks and will employ sophisticated in situ digital correction techniques enabled by CMOS transistors. 
This paper will also discuss the Phase III program plan as well as future directions for heterogeneous integration technology that will benefit mixed signal circuit applications.", "title": "" }, { "docid": "faa82c37ea37ac9703b471302466c735", "text": "An accurate and robust face recognition system was developed and tested. This system exploits the feature extraction capabilities of the discrete cosine transform (DCT) and invokes certain normalization techniques that increase its robustness to variations in facial geometry and illumination. The method was tested on a variety of available face databases, including one collected at McGill University. The system was shown to perform very well when compared to other approaches.", "title": "" }, { "docid": "fdf979667641e1447f237eb25605c76b", "text": "A green synthesis of highly stable gold and silver nanoparticles (NPs) using arabinoxylan (AX) from ispaghula (Plantago ovata) seed husk is being reported. The NPs were synthesized by stirring a mixture of AX and HAuCl(4)·H(2)O or AgNO(3), separately, below 100 °C for less than an hour, where AX worked as the reducing and the stabilizing agent. The synthesized NPs were characterized by surface plasmon resonance (SPR) spectroscopy, transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD). The particle size was (silver: 5-20 nm and gold: 8-30 nm) found to be dependent on pH, temperature, reaction time and concentrations of AX and the metal salts used. The NPs were poly-dispersed with a narrow range. They were stable for more than two years time.", "title": "" }, { "docid": "e0eded1237c635af3c762f6bbe5d1b26", "text": "Locating boundaries between coherent and/or repetitive segments of a time series is a challenging problem pervading many scientific domains. In this paper we propose an unsupervised method for boundary detection, combining three basic principles: novelty, homogeneity, and repetition. In particular, the method uses what we call structure features, a representation encapsulating both local and global properties of a time series. We demonstrate the usefulness of our approach in detecting music structure boundaries, a task that has received much attention in recent years and for which exist several benchmark datasets and publicly available annotations. We find our method to significantly outperform the best accuracies published so far. Importantly, our boundary approach is generic, thus being applicable to a wide range of time series beyond the music and audio domains.", "title": "" }, { "docid": "a36d019f5016d0e86ac8d7c412a3c9fd", "text": "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. 
This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.", "title": "" } ]
scidocsrr
b67dca02e6a56702e530eed344a8e000
A Graph-Based Algorithm for Inducing Lexical Taxonomies from Scratch
[ { "docid": "074011796235a8ab0470ba0fe967918f", "text": "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions:popularity and productivity. Intuitively, a candidate ispopular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.", "title": "" } ]
[ { "docid": "9256277615e0016992d007b29a2bcf21", "text": "Three experiments explored how words are learned from hearing them across contexts. Adults watched 40-s videotaped vignettes of parents uttering target words (in sentences) to their infants. Videos were muted except for a beep or nonsense word inserted where each \"mystery word\" was uttered. Participants were to identify the word. Exp. 1 demonstrated that most (90%) of these natural learning instances are quite uninformative, whereas a small minority (7%) are highly informative, as indexed by participants' identification accuracy. Preschoolers showed similar information sensitivity in a shorter experimental version. Two further experiments explored how cross-situational information helps, by manipulating the serial ordering of highly informative vignettes in five contexts. Response patterns revealed a learning procedure in which only a single meaning is hypothesized and retained across learning instances, unless disconfirmed. Neither alternative hypothesized meanings nor details of past learning situations were retained. These findings challenge current models of cross-situational learning which assert that multiple meaning hypotheses are stored and cross-tabulated via statistical procedures. Learners appear to use a one-trial \"fast-mapping\" procedure, even under conditions of referential uncertainty.", "title": "" }, { "docid": "b0bae633eb8b54a8a0a174da8eb59b26", "text": " Advancement in payment technologies have an important impact on the quality of life. The emerging payment technologies create both opportunities and challenges for future. Being a quick and convenient process, contactless payment gained its momentum, especially in merchants, where throughput is the main important parameter. However, it poses risk to issuers as no robust verification method of customer is available. Thus giving rise to quests to evolve and sustain a wellorganized, efficient, reliable and secure unified payment system, which may contribute to the smooth functioning of the market by eliminating scratch in business. This article presents an approach and module by which one card can communicate with the other using Near Field Communication (NFC) technology to transfer money from payer’s bank to payee’s bank by digital means. This approach eliminates the need of physical cash and also serves all types of payment and identity needs. Embodiments of this approach furnish a medium for cashless card-to-card transaction. The module, which is called Swing-Pay, communicates with its concerned bank via GSM. The security of this module is intensified using biometric authentication. The article also presents an app on Android platform, which works as a scanner of the proposed module to read the identity details of concerned person, the owner of the card. We have also presented the prototype of a digital card. This card can also be used as virtual identity card (ID), accumulating the information of all ID cards including electronic Passport, Voter ID, and Driving License.", "title": "" }, { "docid": "77a42190d5acf347920c11d3a3186f4f", "text": "Changes in retinal vessel diameter are an important sign of diseases such as hypertension, arteriosclerosis and diabetes mellitus. Obtaining precise measurements of vascular widths is a critical and demanding process in automated retinal image analysis as the typical vessel is only a few pixels wide. This paper presents an algorithm to measure the vessel diameter to subpixel accuracy. 
The diameter measurement is based on a two-dimensional difference of Gaussian model, which is optimized to fit a two-dimensional intensity vessel segment. The performance of the method is evaluated against Brinchmann-Hansen's half height, Gregson's rectangular profile and Zhou's Gaussian model. Results from 100 sample profiles show that the presented algorithm is over 30% more precise than the compared techniques and is accurate to a third of a pixel.", "title": "" }, { "docid": "50852a76077de92c7e602e8ad43418f7", "text": "A key element to designing software architectures of good quality is the systematic handling of contradicting quality requirements and the structuring principles that support them. The theory of inventive problem solving (TRIZ) by Altshuller offers tools that can be used to define such a systematic way. This paper describes the idea and preliminary results of using inventive principles and the contradiction matrix for the resolution of contradictions in the design of software architectures. By rearchitecting a flight simulation system these tools are analysed and their further development is proposed. © 2010 Published by Elsevier Ltd.", "title": "" }, { "docid": "a4268c77c3f51ca8d05fa0d108682883", "text": "In this paper, we propose a locality-constrained and sparsity-encouraged manifold fitting approach, aiming at capturing the locally sparse manifold structure into neighborhood graph construction by exploiting a principled optimization model. The proposed model formulates neighborhood graph construction as a sparse coding problem with the locality constraint, therefore achieving simultaneous neighbor selection and edge weight optimization. The core idea underlying our model is to perform a sparse manifold fitting task for each data point so that close-by points lying on the same local manifold are automatically chosen to connect and meanwhile the connection weights are acquired by simple geometric reconstruction. We term the novel neighborhood graph generated by our proposed optimization model M-Fitted Graph since such a graph stems from sparse manifold fitting. To evaluate the robustness and effectiveness of M-fitted graphs, we leverage graph-based semisupervised learning as the testbed. Extensive experiments carried out on six benchmark datasets validate that the proposed M-fitted graph is superior to state-of-the-art neighborhood graphs in terms of classification accuracy using popular graph-based semi-supervised learning methods.", "title": "" }, { "docid": "23e5520226bc76f67d0a1e9ef98a4bb2", "text": "This report analyzes the modelling of default intensities and probabilities in single-firm reduced-form models, and reviews the three main approaches to incorporating default dependencies within the framework of reduced models. The first approach, the conditionally independent defaults (CID), introduces credit risk dependence between firms through the dependence of the firms’ intensity processes on a common set of state variables. Contagion models extend the CID approach to account for the empirical observation of default clustering. There exist periods in which the firms’ credit risk is increased and in which the majority of the defaults take place. Finally, default dependencies can also be accounted for using copula functions. The copula approach takes as given the marginal default probabilities of the different firms and plugs them into a copula function, which provides the model with the default dependence structure. 
After a description of copulas, we present two different approaches of using copula functions in intensity models, and discuss the issues of the choice and calibration of the copula function. ∗This report is a revised version of the Master’s Thesis presented in partial fulfillment of the 2002-2003 MSc in Financial Mathematics at King’s College London. I thank my supervisor Lane P. Hughston and everyone at the Financial Mathematics Group at King’s College, particularly Giulia Iori and Mihail Zervos. Financial support by Banco de España is gratefully acknowledged. Any errors are the exclusive responsibility of the author. CEMFI, Casado del Alisal 5, 28014 Madrid, Spain. Email: elizalde@cemfi.es.", "title": "" }, { "docid": "8047032f0ef24d5d32ae3a5eae3e4bf3", "text": "BACKGROUND\nFox-Fordyce disease (FFD) is a relatively rare entity with a typical clinical presentation. Numerous studies have described unifying histopathological features of FFD, which together suggest a defect in the follicular infundibulum resulting in follicular dilation with keratin plugging, subsequent apocrine duct obstruction, and apocrine gland dilation, with eventual extravasation of the apocrine secretions as the primary histopathogenic events in the evolution of the disease.\n\n\nOBSERVATIONS\nWe describe a case of FFD that developed in a 41-year-old woman 3 months after completing a series of axillary laser hair removal treatments, and we detail the clinical and histopathological changes typical for FFD.\n\n\nCONCLUSION\nBecause defective infundibular maturation has been suggested to play a central role in the evolution of FFD, the close temporal relationship of laser hair therapy with the development of FFD suggests a causal role, which we continue to explore.", "title": "" }, { "docid": "d67c9703ee45ad306384bbc8fe11b50e", "text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. 
Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.", "title": "" }, { "docid": "3bca8a53611f295a30df946f6a301eb5", "text": "A current assisted photonic demodulator for use as a pixel in a 3-D time-of-flight imager shows nearly 100% static demodulator contrast and is operable beyond 30 MHz. An integrated tunable sensitivity control is also presented for increasing the distance measurement range and avoiding unwanted saturation during integration periods. This is achieved by application of a voltage on a dedicated drain tap showing a quenching of sensor sensitivity to below 1%", "title": "" }, { "docid": "9c85f1543c688d4fda2124f9d282264f", "text": "Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performances depend on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper consists of a protocol that allows for a comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions could be compared. The combination of our protocol, software, and baseline results demonstrate convincingly how open-source software can push forward the research in mapping and navigation. F. Pomerleau (B) · F. Colas · R. Siegwart · S. Magnenat Autonomous System Lab, ETH Zurich, Tannenstrasse 3, 8092 Zurich, Switzerland e-mail: f.pomerleau@gmail.com F. Colas e-mail: francis.colas@mavt.ethz.ch R. Siegwart e-mail: rsiegwart@ethz.ch S. Magnenat e-mail: stephane@magnenat.net", "title": "" }, { "docid": "7d62ae437a6b77e19f0d3292954a8471", "text": "A numerical tool for the optimisation of the scantlings of a ship is extended by considering production cost, weight and moment of inertia in the objective function. A multi-criteria optimisation of a passenger ship is conducted to illustrate the analysis process. Pareto frontiers are obtained and results are verified with Bureau Veritas rules.", "title": "" }, { "docid": "2576eee3ef35717ac70e5ce302c0853c", "text": "Management of lumbar burst fractures remains controversial. Surgical reduction/stabilization is becoming more popular; however, the functional impact of operative intervention is not clear. The purpose of this study was to assess health-related quality of life and functional outcome after posterior fixation of lumbar burst fractures with either posterolateral or intrabody bone grafting. Twenty-four subjects were included. 
Radiographs and computed tomography scans were evaluated for deformity (kyphosis, vertebral compression, lateral angulation, lateral body height, and canal compromise) postoperatively, at 1 year, and at final follow-up (mean 3.2 years). Patients completed the SF 36 Health Survey and the Oswestry Low Back Pain Disability Questionnaire at final follow-up. Significant improvement was noted in midsagittal diameter compromise, vertebral compression, and kyphosis. The difference observed between the respondents mean scores on the SF 36 was not significantly different from those presented as the U.S. national average (p = 0.053). Data from the Oswestry questionnaire indicated a similarly high level of function. Overall, we found posterior spinal instrumentation to correlate with positive functional outcome based on both general health (SF 36) and joint-specific outcome scales (Oswestry). Posterior instrumentation provides sound canal decompression, kyphotic reduction, and maintains vertebral height with minimal transgression and long-term sequelae. In cases of severe initial deformity and neurologic compromise, intrabody bone grafting is most certainly indicated; the additional support provided by a posterolateral graft may also prove beneficial as an adjunct.", "title": "" }, { "docid": "98b603ed5be37165cc22da7650023d7d", "text": "One reason that word learning presents a challenge for children is because pairings between word forms and meanings are arbitrary conventions that children must learn via observation - e.g., the fact that \"shovel\" labels shovels. The present studies explore cases in which children might bypass observational learning and spontaneously infer new word meanings: By exploiting the fact that many words are flexible and systematically encode multiple, related meanings. For example, words like shovel and hammer are nouns for instruments, and verbs for activities involving those instruments. The present studies explored whether 3- to 5-year-old children possess semantic generalizations about lexical flexibility, and can use these generalizations to infer new word meanings: Upon learning that dax labels an activity involving an instrument, do children spontaneously infer that dax can also label the instrument itself? Across four studies, we show that at least by age four, children spontaneously generalize instrument-activity flexibility to new words. Together, our findings point to a powerful way in which children may build their vocabulary, by leveraging the fact that words are linked to multiple meanings in systematic ways.", "title": "" }, { "docid": "e053e9be9d0f216a101387e9e3837908", "text": "In this paper we present our UI development environment based on components. The UI is considered as a technical service of a business component just like security or persistence. The dialog between UI and business components is managed by an interaction/coordination service that allows the reconfiguration of components without modifying them. A UI component merging service handles dynamic assembly of corresponding UI components.", "title": "" }, { "docid": "2088fcfb9651e2dfcbaa123b723ef8aa", "text": "Head pose estimation is not only a crucial preprocessing task in applications such as facial expression and face recognition, but also the core task for many others, e.g. gaze; driver focus of attention; head gesture recognitions. In real scenarios, the fine location and scale of a processed face patch should be consistently and automatically obtained. 
To this end, we propose a depth-based face spotting technique in which the face is cropped with respect to its depth data, and is modeled by its appearance features. By employing this technique, the localization rate was gained. additionally, by building a head pose estimator on top of it, we achieved more accurate pose estimates and better generalization capability. To estimate the head pose, we exploit Support Vector (SV) regressors to map Histogram of oriented Gradient (HoG) features extracted from the spotted face patches in both depth and RGB images to the head rotation angles. The developed pose estimator compared favorably to state-of-the-art approaches on two challenging DRGB databases.", "title": "" }, { "docid": "b80df19e67d2bbaabf4da18d7b5af4e2", "text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.", "title": "" }, { "docid": "5bb98a6655f823b38c3866e6d95471e9", "text": "This article describes the HR Management System in place at Sears. Key emphases of Sears' HR management infrastructure include : (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include : (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process . © 1999 John Wiley & Sons, Inc .", "title": "" }, { "docid": "e0c76b882508b02f9eedbd8b4ec01379", "text": "Fine-grained image classification is challenging due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Since two different subcategories is distinguished only by the subtle differences in some specific parts, semantic part localization is crucial for fine-grained image classification. Most previous works improve the accuracy by looking for the semantic parts, but rely heavily upon the use of the object or part annotations of images whose labeling are costly. 
Recently, some researchers begin to focus on recognizing sub-categories via weakly supervised part detection instead of using the expensive annotations. However, these works ignore the spatial relationship between the object and its parts as well as the interaction of the parts, both of them are helpful to promote part selection. Therefore, this paper proposes a weakly supervised part selection method with spatial constraints for fine-grained image classification, which is free of using any bounding box or part annotations. We first learn a whole-object detector automatically to localize the object through jointly using saliency extraction and co-segmentation. Then two spatial constraints are proposed to select the distinguished parts. The first spatial constraint, called box constraint, defines the relationship between the object and its parts, and aims to ensure that the selected parts are definitely located in the object region, and have the largest overlap with the object region. The second spatial constraint, called parts constraint, defines the relationship of the object’s parts, is to reduce the parts’ overlap with each other to avoid the information redundancy and ensure the selected parts are the most distinguishing parts from other categories. Combining two spatial constraints promotes parts selection significantly as well as achieves a notable improvement on fine-grained image classification. Experimental results on CUB-200-2011 dataset demonstrate the superiority of our method even compared with those methods using expensive annotations.", "title": "" }, { "docid": "57df952cda0133c4a90167dd6cd045f5", "text": "Previous research about sensor based attacks on Android platform focused mainly on accessing or controlling over sensitive components, such as camera, microphone and GPS. These approaches obtain data from sensors directly and need corresponding sensor invoking permissions.\n This paper presents a novel approach (GVS-Attack) to launch permission bypassing attacks from a zero-permission Android application (VoicEmployer) through the phone speaker. The idea of GVS-Attack is to utilize an Android system built-in voice assistant module -- Google Voice Search. With Android Intent mechanism, VoicEmployer can bring Google Voice Search to foreground, and then plays prepared audio files (like \"call number 1234 5678\") in the background. Google Voice Search can recognize this voice command and perform corresponding operations. With ingenious design, our GVS-Attack can forge SMS/Email, access privacy information, transmit sensitive data and achieve remote control without any permission. Moreover, we found a vulnerability of status checking in Google Search app, which can be utilized by GVS-Attack to dial arbitrary numbers even when the phone is securely locked with password.\n A prototype of VoicEmployer has been implemented to demonstrate the feasibility of GVS-Attack. In theory, nearly all Android (4.1+) devices equipped with Google Services Framework can be affected by GVS-Attack. 
This study may inspire application developers and researchers to rethink that zero permission doesn't mean safety and the speaker can be treated as a new attack surface.", "title": "" }, { "docid": "6dc6d3be0cbfd280efc81adef6182d0d", "text": "This paper aims to trace the development of management accounting systems (MAS) in a Portuguese bank, where an activity based costing system (ABC) is being trialled for implementation, as a means to improving the economy, efficiency and effectiveness of employee activity. The culture of banking in Portugal has changed significantly over the last 25 years, but at the same time there are older traditions which remain powerful. It will therefore be significant to study how an imported MAS like ABC is developed and disseminated within a Portuguese banking context. The research can be classified as a longitudinal study of organisational change using a single case study. It draws on Morgan and Sturdy’s (2000) critical framework for exploring change through three lenses – changing structures, changing discourses and the effect of both these processes on power and inequality. The study provides new insights into how management accounting practices, along with other organisational systems, play an important role questioning, visualising, analysing, and measuring implemented strategies. These practices have an important influence on strategic decision-making, and help legitimate action. As the language and practice of management have shifted towards strategy and marketing discourses, patterns of work, organisation and career are being restructured.", "title": "" } ]
scidocsrr
366616775ac8ff8a3836593a9785eab6
Community Specific Temporal Topic Discovery from Social Media
[ { "docid": "2ee0647fd07ad5cb2bb881cea1081d89", "text": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks.", "title": "" }, { "docid": "6b855b55f22de3e3f65ce56a69c35876", "text": "This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.", "title": "" } ]
[ { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2e9786cfe8e7a759ed1e1481d59624ba", "text": "Global path planning for mobile robot using genetic algorithm and A* algorithm is investigated in this paper. The proposed algorithm includes three steps: the MAKLINK graph theory is adopted to establish the free space model of mobile robots firstly, then Dijkstra algorithm is utilized for finding a feasible collision-free path, finally the global optimal path of mobile robots is obtained based on the hybrid algorithm of A* algorithm and genetic algorithm. Experimental results indicate that the proposed algorithm has better performance than Dijkstra algorithm in term of both solution quality and computational time, and thus it is a viable approach to mobile robot global path planning.", "title": "" }, { "docid": "8038e56b44b4f554dc8fed075910a6dc", "text": "In this paper, we describe improved alignment models for statistical machine translation. The statistical translation approach uses two types of information: a translation model and a language model. The language model used is a bigram or general m-gram model. The translation model is decomposed into a lexical and an alignment model. We describe two different approaches for statistical translation and present experimental results. The first approach is based on dependencies between single words, the second approach explicitly takes shallow phrase structures into account, using two different alignment levels: a phrase level alignment between phrases and a word level alignment between single words. We present results using the Verbmobil task (German-English, 6000word vocabulary) which is a limited-domain spoken-language task. The experimental tests were performed on both the text transcription and the speech recognizer output. 1 S t a t i s t i c a l M a c h i n e T r a n s l a t i o n The goal of machine translation is the translation of a text given in some source language into a target language. We are given a source string f / = fl...fj...fJ, which is to be translated into a target string e{ = el...ei...ex. 
Among all possible target strings, we will choose the string with the highest probability: = argmax {Pr(ezIlflJ)}", "title": "" }, { "docid": "f9c6330080218f9b3a38b692630b9639", "text": "A 4.8GHz LC voltage controlled oscillator (VCO) for Wireless Sensor Network (WSN) SoC RFIC chipset is designed based on SMIC 0.18 μm 1P6M RF CMOS process. The core circuit adopts complementary differential negative resistance structure with resistor biasing which achieves good phase noise performance. The 2 bit switched capacitor array provides extra tuning range. The chip size is 600μm×475μm with testing pads. With a 1.8V supply voltage, the post-simulation and chipset measured results show that the achieved maximum 40% tuning range can perfectly compensating the deviation due to process corners. And the measured phase noise is −96dBc/Hz@3MHz with the carrier be 4.8GHz. Besides, the operating current of the whole circuit is less than 7mA.", "title": "" }, { "docid": "609fa8716f97a1d30683997d778e4279", "text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.", "title": "" }, { "docid": "2a6d5ff1fe3d97c9a01556dfc3984b98", "text": "Server virtualization influences all aspects of IT service management, and is a key enabling technology for cloud computing. In this paper we focus on the impact of server virtualization on service delivery and service support as described by ITIL. We identify advantages, disadvantages, and risks of server virtualization for capacity, management, availability, costs, and security of IT services, and relate these aspects to the ITIL processes. We validated our results using an empirical test within four different organizations. Our main conclusion is that server virtualization does not change the ITIL processes themselves, but it does change the way the processes are executed. Server virtualization is no silver bullet for solving problems in IT operations and management. If server virtualization has been properly introduced, it can offer faster and better execution of the ITIL processes. The impact is most significant on the Financial Management process, while also Service Level Management, Incident Management, Change Management, IT Service Continuity Management and Availability Management are affected considerably. 
The impact is less prominent for Application Management, Software Asset Management, Release Management, Configuration Management and Security Management.", "title": "" }, { "docid": "ec2702db7dd7f2641aa7195feb3d1c29", "text": "We present Najm, a set of tools built on the axioms of absolute geometry for exploring the design space of Islamic star patterns. Our approach makes use of a novel family of tilings, called \"inflation tilings,\" which are particularly well suited as guides for creating star patterns. We describe a method for creating a parameterized set of motifs that can be used to fill the many regular polygons that comprise these tilings, as well as an algorithm to infer geometry for any irregular polygons that remain. Erasing the underlying tiling and joining together the inferred motifs produces the star patterns. By choice, Najm is build upon the subset of geometry that makes no assumption about the behavior of parallel lines. As a consequence, star patterns created by Najm can be designed equally well to fit the Euclidean plane, the hyperbolic plane, or the surface of a sphere.", "title": "" }, { "docid": "03f99359298276cb588eb8fa85f1e83e", "text": "In recent years, there has been a growing interest in the wireless sensor networks (WSN) for a variety of applications such as the localization and real time positioning. Different approaches based on artificial intelligence are applied to solve common issues in WSN and improve network performance. This paper addresses a survey on machine learning techniques for localization in WSNs using Received Signal Strength Indicator.", "title": "" }, { "docid": "dc66c67cb33e405a548b0ec665df547f", "text": "This paper presents a deep learning method for faster magnetic resonance imaging (MRI) by reducing k-space data with sub-Nyquist sampling strategies and provides a rationale for why the proposed approach works well. Uniform subsampling is used in the time-consuming phase-encoding direction to capture high-resolution image information, while permitting the image-folding problem dictated by the Poisson summation formula. To deal with the localization uncertainty due to image folding, a small number of low-frequency k-space data are added. Training the deep learning net involves input and output images that are pairs of the Fourier transforms of the subsampled and fully sampled k-space data. Our experiments show the remarkable performance of the proposed method; only 29[Formula: see text] of the k-space data can generate images of high quality as effectively as standard MRI reconstruction with the fully sampled data.", "title": "" }, { "docid": "6cb80327849cc796c4b2e34f368488e5", "text": "We present novel experiments in modeling the rise and fall of story characteristics within narrative, leading up to the Most Reportable Event (MRE), the compelling event that is the nucleus of the story. We construct a corpus of personal narratives from the bulletin board website Reddit, using the organization of Reddit content into topic-specific communities to automatically identify narratives. Leveraging the structure of Reddit comment threads, we automatically label a large dataset of narratives. 
We present a change-based model of narrative that tracks changes in formality, affect, and other characteristics over the course of a story, and we use this model in distant supervision and selftraining experiments that achieve significant improvements over the baselines at the task of identifying MREs.", "title": "" }, { "docid": "d6b213889ba6073b0987852e31b98c6a", "text": "Nowadays, large volumes of multimedia data are outsourced to the cloud to better serve mobile applications. Along with this trend, highly correlated datasets can occur commonly, where the rich information buried in correlated data is useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, we aim to provide a mobile-friendly design that saves the transmission cost for mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. First, we propose a secure and efficient index design that allows the mobile client to securely find from encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support secure image reproduction from encrypted candidate selection. We formally analyze the security strength of the design. Our experiments explicitly show that both the bandwidth and energy consumptions at the mobile client can be saved, while achieving all service requirements and security guarantees.", "title": "" }, { "docid": "f87e8f9d733ed60cedfda1cbfe176cbf", "text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.", "title": "" }, { "docid": "4ab56fd51a4f6fb9d04fdc4409b311af", "text": "This paper describes an improved version of the Tenca-Koc unified scalable radix-2 Montgomery multiplier with half the latency for small and moderate precision operands and half the queue memory requirement. Like the Tenca-Koc multiplier, this design is reconfigurable to accept any input precision in either GF(p) or GF(2/sup n/) up to the size of the on-chip memory. 
An FPGA implementation can perform 1024-bit modular exponentiation in 16 ms using 5598 4-input lookup tables, making it the fastest unified scalable design yet reported.", "title": "" }, { "docid": "06372546b3dedb8a1af4324ce57d56f3", "text": "Twitter1, the microblog site started in 2006, has become a social phenomenon. More than 340 million Tweets are sent out every day2. While a majority of posts are conversational or not particularly meaningful, about 3.6% of the posts concern topics of mainstream news3. Twitter has been credited with providing the most current news about many important events before traditional media, such as the attacks in Mumbai in November 2008. Twitter also played a prominent role in the unfolding of the troubles in Iran in 2009 subsequent to a disputed election, and the so-called Twitter Revolutions4 in Tunisia and Egypt in 2010-11. To help people who read Twitter posts or tweets, Twitter provides two interesting features: an API that allows users to search for posts that contain a topic phrase and a short list of popular topics called Trending Topics. A user can perform a search for a topic and retrieve a list of most recent posts that contain the topic phrase. The di culty in interpreting the results is that the returned posts are only sorted by recency, not relevancy. Therefore, the user is forced to manually read through the posts in order to understand what users are primarily saying about a particular topic. A website called WhatTheTrend5 attempts to provide definitions of trending topics by allowing users to manually enter descriptions of why a topic is trending. Here is an example of a definition from WhatTheTrend:", "title": "" }, { "docid": "6a0f60881dddc5624787261e0470b571", "text": "Title of Dissertation: AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES Marco David Adelfio, Doctor of Philosophy, 2015 Dissertation directed by: Professor Hanan Samet Department of Computer Science Data tables on the Web hold large quantities of information, but are difficult to search, browse, and merge using existing systems. This dissertation presents a collection of techniques for extracting, processing, and querying tables that contain geographic data, by harnessing the coherence of table structures for retrieval tasks. Data tables, including spreadsheets, HTML tables, and those found in rich document formats, are the standard way of communicating structured data for typical computer users. Notably, geographic tables (i.e., those containing names of locations) constitute a large fraction of publicly-available data tables and are ripe for exposure to Internet users who are increasingly comfortable interacting with geographic data using web-based maps. Of particular interest is the creation of a large repository of geographic data tables that would enable novel queries such as “find vacation itineraries geographically similar to mine” for use in trip planning or “find demographic datasets that cover regions X, Y, and Z” for sociological research. In support of these goals, this dissertation identifies several methods for using the structure and context of data tables to improve the interpretation of the contents, even in the presence of ambiguity. First, a method for identifying functional components of data tables is presented, capitalizing on techniques for sequence labeling that are used in natural language processing. 
Next, a novel automated method for converting place references to physical latitude/longitude values, a process known as geotagging, is applied to tables with high accuracy. A classification procedure for identifying a specific class of geographic table, the travel itinerary, is also described, which borrows inspiration from optimization techniques for the traveling salesman problem (TSP). Finally, methods for querying spatially similar tables are introduced and several mechanisms for visualizing and interacting with the extracted geographic data are explored. AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES", "title": "" }, { "docid": "88033862d9fac08702977f1232c91f3a", "text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.", "title": "" }, { "docid": "7b5f0c88eaf8c23b8e2489e140d0022f", "text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using Caffe platform, which consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with Dice metric of 0.75±0.04 is reported on York MR dataset. While in its current form, our proposed one-step deep learning method cannot compete with state-of-art myocardium segmentation methods, it delivers promising first pass segmentation results.", "title": "" }, { "docid": "91b386ef617f75dd480e44708eb5a521", "text": "The recent rise of interest in Virtual Reality (VR) came with the availability of commodity commercial VR products, such as the Head Mounted Displays (HMD) created by Oculus and other vendors. To accelerate the user adoption of VR headsets, content providers should focus on producing high quality immersive content for these devices. 
Similarly, multimedia streaming service providers should enable the means to stream 360 VR content on their platforms. In this study, we try to cover different aspects related to VR content representation, streaming, and quality assessment that will help establishing the basic knowledge of how to build a VR streaming system.", "title": "" }, { "docid": "920a505d8f5ed9b638c268c3a1022b9e", "text": "As the Internet of Things (IoT) matures in commercial sectors, the promise of diverse new technologies such as data-driven applications, intelligent adaptive systems, and embedded optimized automation will be realized in every environment. An immediate research question is whether contemporary IoT concepts can be applied also to military battlefield environments and can realize benefits similar to those in industry. Military environments, especially those that depend on tactical communications, are much more challenging than commercial environments. Thus it is likely many commercial IoT architectures and technologies may not translate into the military domain and others will require additional research to enable deployment and efficient implementation. This paper investigates these issues and describes potential military operational activities that could benefit from commercial IoT technologies, including logistics, sensing/surveillance, and situation awareness. In addition, the paper lays out a roadmap for future research necessary to leverage IoT and apply it to the tactical battlefield environment.", "title": "" }, { "docid": "c87b1a9633a3068e80d023604ed843f3", "text": "Activities of a clinical staff in healthcare environments must regularly be adapted to new treatment methods, medications, and technologies. This constant evolution requires the monitoring of the workflow, or the sequence of actions from actors involved in a procedure, to ensure quality of medical services. In this context, recent advances in sensing technologies, including Real-time Location Systems and Computer Vision, enable high-precision tracking of actors and equipment. The current state-of-the-art about healthcare workflow monitoring typically focuses on a single technology and does not discuss its integration with others. Such an integration can lead to better solutions to evaluate medical workflows. This study aims to fill the gap regarding the analysis of monitoring technologies with a systematic literature review about sensors for capturing the workflow of healthcare environments. Its main scientific contribution is to identify both current technologies used to track activities in a clinical environment and gaps on their combination to achieve better results. It also proposes a taxonomy to classify work regarding sensing technologies and methods. The literature review does not present proposals that combine data obtained from Real-time Location Systems and Computer Vision sensors. Further analysis shows that a multimodal analysis is more flexible and could yield better results.", "title": "" } ]
scidocsrr
cf5ab68076bb38b7a2333b8e1fec4e91
Face morphing using critical point filters
[ { "docid": "2d997b25227266eddba3da5f728d078b", "text": "Image morphing has received much attention in recent years. It has proven to be a powerful tool for visual effects in film and television, enabling the fluid transformation of one digital image into another. This paper surveys the growth of this field and describes recent advances in image morphing in terms of feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results. We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state-of-the-art in image morphing. Recent work on a generalized framework for morphing among multiple images is described.", "title": "" } ]
[ { "docid": "4c2108f46571303e64b568647e70171e", "text": "This paper proposes a cross modal retrieval system that leverages on image and text encoding. Most multimodal architectures employ separate networks for each modality to capture the semantic relationship between them. However, in our work image-text encoding can achieve comparable results in terms of cross modal retrieval without having to use separate network for each modality. We show that text encodings can capture semantic relationships between multiple modalities. In our knowledge, this work is the first of its kind in terms of employing a single network and fused image-text embedding for cross modal retrieval. We evaluate our approach on two famous multimodal datasets: MS-COCO and Flickr30K.", "title": "" }, { "docid": "cb71e8b2bb1eeaad91a2036a9d3828ac", "text": "This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewiselinear surface in 3-D defined by a set of polygons; typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to nonmanifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed. This work was supported by ARPA contract F19628-93-C-0171 and NSF Young Investigator award CCR-9357763. Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.", "title": "" }, { "docid": "8d4bf1b8b45bae6c506db5339e6d9025", "text": "Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrixmatrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depend on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. 
We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.", "title": "" }, { "docid": "e648fb690dae270c4e63442a49aacaa9", "text": "It is argued that the concept of free will, like the concept of truth in formal languages, requires a separation between an object level and a meta-level for being consistently defined. The Jamesian two-stage model, which deconstructs free will into the causally open “free” stage with its closure in the “will” stage, is implicitly a move in this direction. However, to avoid the dilemma of determinism, free will additionally requires an infinite regress of causal meta-stages, making free choice a hypertask. We use this model to define free will of the rationalist-compatibilist type. This is shown to provide a natural three-way distinction between quantum indeterminism, freedom and free will, applicable respectively to artificial intelligence (AI), animal agents and human agents. We propose that the causal hierarchy in our model corresponds to a hierarchy of Turing uncomputability. Possible neurobiological and behavioral tests to demonstrate free will experimentally are suggested. Ramifications of the model for physics, evolutionary biology, neuroscience, neuropathological medicine and moral philosophy are briefly outlined.", "title": "" }, { "docid": "7979bd1fca3e705837547aea5d1a0eb4", "text": "In court interpreting, the law distinguishes between the prescribed activity of what it considers translation – defined as an objective, mechanistic, transparent process in which the interpreter acts as a mere conduit of words – and the proscribed activity of interpretation, which involves interpreters decoding and attempting to convey their understanding of speaker meanings and intentions. This article discusses the practicability of this cut-and-dried legal distinction between translation and interpretation and speculates on the reasons for its existence. An attempt is made to illustrate some of the moral dilemmas that confront court interpreters, and an argument is put forward for a more realist understanding of their role and a major improvement in their professional status; as recognized professionals, court interpreters can more readily assume the latitude they need in order to ensure effective communication in the courtroom. Among members of the linguistic professions, the terms interpretation and interpreting are often used interchangeably to refer to the oral transfer of meaning between languages, as opposed to translation, which is reserved for the written exercise. Interpretation, however, becomes a potentially charged and ambiguous term in the judicial context, where it refers to a specific judicial process. This process is performed intralingually, in the language of the relevant legal system, and effected in accordance with a number of rules and presumptions for determining the ‘true’ meaning of a written document. Hence the need to adopt a rigorous distinction between interpreting as an interlingual process and interpretation as the act of conveying one’s understanding of meanings and intentions within the same language in order to avoid misunderstanding in the judicial context. Morris (1993a) discusses the attitude of members of the legal community to the activities and status of court interpreters, with particular reference to English-speaking countries. 
The discussion is based on an extensive survey of both historical and modern English-language law reports of cases in which issues of interlinguistic interpreting were addressed explicitly. The comments in these reports record the beliefs, attitudes and arguments of legal practitioners, mainly lawyers and judges, at different periods in history and in various jurisdictions. By and large, they reflect negative judicial views of the interpreting process and of those who perform it, in the traduttore traditore tradition, spanning the gamut from annoyance to venom, with almost no understanding of the linguistic issues and dilemmas involved. Legal practitioners, whose own performance, like that of translators and interpreters, relies on the effective use and manipulation of language, were found to deny interpreters the same latitude in understanding and expressing concepts that they themselves enjoy. Thus they firmly state that, when rendering meaning from one language to another, court interpreters are not to interpret – this being an activity which only lawyers are to perform, but to translate – a term which is defined, sometimes expressly and sometimes by implication, as rendering the speaker’s words verbatim. When it comes to court interpreting, then, the law distinguishes between the prescribed activity of what it calls translation – defined as an objective, mechanistic, transparent process in which the interpreter acts as a mere conduit of words – and the proscribed activity of interpretation, which involves interpreters decoding and attempting to convey their understanding of speaker meanings and intentions. In the latter case, the interpreter is perceived as assuming an active role in the communication process, something that is anathema to lawyers and judges. The law’s attitude to interpreters is at odds with the findings of current research in communication which recognizes the importance of context in the effective exchange of messages: it simply does not allow interpreters to use their discretion or act as mediators in the judicial process. The activity of interpretation, as distinct from translation, is held by the law to be desirable and acceptable for jurists, but utterly inappropriate and prohibited for linguists. The law continues to proscribe precisely those aspects of the interpreting process which enable it to be performed with greater accuracy because they have two undesirable side effects from the legal point of view: one is to highlight the interpreter’s presence and contribution, the other is to challenge and potentially undermine the performance of the judicial participants in forensic activities. 1. Interpretation as a communicative process The contemporary view of communication, of which interlingual interpretation is but one particularly salient form, sees all linguistic acts of communication as involving (or indeed, as being tantamount to) acts of translation, whether or not they involve different linguistic systems. Similarly, modern translation theorists see all interlingual translation as", "title": "" }, { "docid": "6b03b9e8fdc1b5d9f01b3a9426e0ab3a", "text": "We consider the problem of weakly supervised object localization. For an object of interest (e.g. “car”), an image is weakly labeled when its label only indicates the presence/absence of this object, but not the exact location of the object in the image.
Given a collection of weakly labeled images for an object, our goal is to localize the object of interest in each image. We propose a novel architecture called the regularized attention network for this problem. Our work builds upon the attention network proposed in [1]. We extend the standard attention network by incorporating a regularization term that encourages the attention scores of object proposals to mimic the scoring distribution of a strong fully supervised object detector. Despite of the simplicity of our approach, our proposed architecture achieves the state-of-the-art results on several benchmark datasets.", "title": "" }, { "docid": "c0dbd6356ead3a9542c9ec20dd781cc7", "text": "This paper aims to address the importance of supportive teacher–student interactions within the learning environment. This will be explored through the three elements of the NSW Quality Teaching Model; Intellectual Quality, Quality Learning Environment and Significance. The paper will further observe the influences of gender on the teacher–student relationship, as well as the impact that this relationship has on student academic outcomes and behaviour. Teacher–student relationships have been found to have immeasurable effects on students’ learning and their schooling experience. This paper examines the ways in which educators should plan to improve their interactions with students, in order to allow for quality learning. This journal article is available in Journal of Student Engagement: Education Matters: http://ro.uow.edu.au/jseem/vol2/iss1/2 Journal of Student Engagement: Education matters 2012, 2 (1), 2–9 Lauren Liberante 2 The importance of teacher–student relationships, as explored through the lens of the NSW Quality Teaching Model", "title": "" }, { "docid": "bf8f46e4c85f7e45879cee4282444f78", "text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.", "title": "" }, { "docid": "23afce4680f31eae8ae63d9aa722ce33", "text": "Judith Jacobi, PharmD, FCCM, BCPS; Gilles L. Fraser, PharmD, FCCM; Douglas B. Coursin, MD; Richard R. Riker, MD; Dorrie Fontaine, RN, DNSc, FAAN; Eric T. Wittbrodt, PharmD; Donald B. Chalfin, MD, MS, FCCM; Michael F. Masica, MD, MPH; H. Scott Bjerke, MD; William M. Coplin, MD; David W. Crippen, MD, FCCM; Barry D. Fuchs, MD; Ruth M. Kelleher, RN; Paul E. Marik, MDBCh, FCCM; Stanley A. Nasraway, Jr, MD, FCCM; Michael J. Murray, MD, PhD, FCCM; William T. 
Peruzzi, MD, FCCM; Philip D. Lumb, MB, BS, FCCM. Developed through the Task Force of the American College of Critical Care Medicine (ACCM) of the Society of Critical Care Medicine (SCCM), in collaboration with the American Society of Health-System Pharmacists (ASHP), and in alliance with the American College of Chest Physicians; and approved by the Board of Regents of ACCM and the Council of SCCM and the ASHP Board of Directors", "title": "" }, { "docid": "6c11bb11540719ad64e98bb67cd9a798", "text": "Opium poppy (Papaver somniferum) produces a large number of benzylisoquinoline alkaloids, including the narcotic analgesics morphine and codeine, and has emerged as one of the most versatile model systems to study alkaloid metabolism in plants. As summarized in this review, we have taken a holistic strategy—involving biochemical, cellular, molecular genetic, genomic, and metabolomic approaches—to draft a blueprint of the fundamental biological platforms required for an opium poppy cell to function as an alkaloid factory. The capacity to synthesize and store alkaloids requires the cooperation of three phloem cell types—companion cells, sieve elements, and laticifers—in the plant, but also occurs in dedifferentiated cell cultures. We have assembled an opium poppy expressed sequence tag (EST) database based on the attempted sequencing of more than 30,000 cDNAs from elicitor-treated cell culture, stem, and root libraries. Approximately 23,000 of the elicitor-induced cell culture and stem ESTs are represented on a DNA microarray, which has been used to examine changes in transcript profile in cultured cells in response to elicitor treatment, and in plants with different alkaloid profiles. Fourier transform-ion cyclotron resonance mass spectrometry and proton nuclear magnetic resonance mass spectroscopy are being used to detect corresponding differences in metabolite profiles. Several new genes involved in the biosynthesis and regulation of alkaloid pathways in opium poppy have been identified using genomic tools. A biological blueprint for alkaloid production coupled with the emergence of reliable transformation protocols has created an unprecedented opportunity to alter the chemical profile of the world’s most valuable medicinal plant.", "title": "" }, { "docid": "159cd44503cb9def6276cb2b9d33c40e", "text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. 
We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.", "title": "" }, { "docid": "4ddb0d4bf09dc9244ee51d4b843db5f2", "text": "BACKGROUND\nMobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps.\n\n\nPURPOSE\nThe aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity.\n\n\nMETHODS\nTop-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013.\n\n\nRESULTS\nMost descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques.\n\n\nCONCLUSIONS\nBehavior change techniques are not widely marketed in contemporary physical activity apps. Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market.", "title": "" }, { "docid": "395362cb22b0416e8eca67ec58907403", "text": "This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.", "title": "" }, { "docid": "6084bf59cfd956d119692d00c442f93d", "text": "Microbial biofilms are complex, self-organized communities of bacteria, which employ physiological cooperation and spatial organization to increase both their metabolic efficiency and their resistance to changes in their local environment. These properties make biofilms an attractive target for engineering, particularly for the production of chemicals such as pharmaceutical ingredients or biofuels, with the potential to significantly improve yields and lower maintenance costs. Biofilms are also a major cause of persistent infection, and a better understanding of their organization could lead to new strategies for their disruption. Despite this potential, the design of synthetic biofilms remains a major challenge, due to the complex interplay between transcriptional regulation, intercellular signaling, and cell biophysics. 
Computational modeling could help to address this challenge by predicting the behavior of synthetic biofilms prior to their construction; however, multiscale modeling has so far not been achieved for realistic cell numbers. This paper presents a computational method for modeling synthetic microbial biofilms, which combines three-dimensional biophysical models of individual cells with models of genetic regulation and intercellular signaling. The method is implemented as a software tool (CellModeller), which uses parallel Graphics Processing Unit architectures to scale to more than 30,000 cells, typical of a 100 μm diameter colony, in 30 min of computation time.", "title": "" }, { "docid": "827396df94e0bca08cee7e4d673044ef", "text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.", "title": "" }, { "docid": "f1be88ab23576cadab69b0c3a03ebd47", "text": "We describe a waveguide to thin-film microstrip transition for highperformance submillimetre wave and teraherz applications. The proposed constant-radius probe couples thin-film microstrip line, to fullheight rectangular waveguide with better than 99% efficiency (VSWR ≤ 1.20) and 45% fractional bandwidth. Extensive HFSS simulations, backed by scale-model measurements, are presented in the paper. By selecting the substrate material and probe radius, any real impedance between ≈ 15-60 Ω can be achieved. The radial probe gives significantly improved performance over other designs discussed in the literature. Although our primary application is submillimetre wave superconducting mixers, we show that membrane techniques should allow broad-band waveguide components to be constructed for the THz frequency range.", "title": "" }, { "docid": "71022e2197bfb99bd081928cf162f58a", "text": "Ophthalmology and visual health research have received relatively limited attention from the personalized medicine community, but this trend is rapidly changing. Postgenomics technologies such as proteomics are being utilized to establish a baseline biological variation map of the human eye and related tissues. In this context, the choroid is the vascular layer situated between the outer sclera and the inner retina. 
The choroidal circulation serves the photoreceptors and retinal pigment epithelium (RPE). The RPE is a layer of cuboidal epithelial cells adjacent to the neurosensory retina and maintains the outer limit of the blood-retina barrier. Abnormal changes in choroid-RPE layers have been associated with age-related macular degeneration. We report here the proteome of the healthy human choroid-RPE complex, using reverse phase liquid chromatography and mass spectrometry-based proteomics. A total of 5309 nonredundant proteins were identified. Functional analysis of the identified proteins further pointed to molecular targets related to protein metabolism, regulation of nucleic acid metabolism, transport, cell growth, and/or maintenance and immune response. The top canonical pathways in which the choroid proteins participated were integrin signaling, mitochondrial dysfunction, regulation of eIF4 and p70S6K signaling, and clathrin-mediated endocytosis signaling. This study illustrates the largest number of proteins identified in human choroid-RPE complex to date and might serve as a valuable resource for future investigations and biomarker discovery in support of postgenomics ophthalmology and precision medicine.", "title": "" }, { "docid": "a4c6af9f76379cbeee46ba5a79b41f01", "text": "Software systems inherently contain vulnerabilities that have been exploited in the past resulting in significant revenue losses. The study of vulnerability life cycles can help in the development, deployment, and maintenance of software systems. It can also help in designing future security policies and conducting audits of past incidents. Furthermore, such an analysis can help customers to assess the security risks associated with software products of different vendors. In this paper, we conduct an exploratory measurement study of a large software vulnerability data set containing 46310 vulnerabilities disclosed since 1988 till 2011. We investigate vulnerabilities along following seven dimensions: (1) phases in the life cycle of vulnerabilities, (2) evolution of vulnerabilities over the years, (3) functionality of vulnerabilities, (4) access requirement for exploitation of vulnerabilities, (5) risk level of vulnerabilities, (6) software vendors, and (7) software products. Our exploratory analysis uncovers several statistically significant findings that have important implications for software development and deployment.", "title": "" }, { "docid": "2d86f517026d93454bb1761dd21c7e9d", "text": "This article presents a new approach to movement planning, on-line trajectory modification, and imitation learning by representing movement plans based on a set of nonlinear differential equations with well-defined attractor dynamics. In contrast to non-autonomous movement representations like splines, the resultant movement plan remains an autonomous set of nonlinear differential equations that forms a control policy (CP) which is robust to strong external perturbations and that can be modified on-line by additional perceptual variables. The attractor landscape of the control policy can be learned rapidly with a locally weighted regression technique with guaranteed convergence of the learning algorithm and convergence to the movement target. This property makes the system suitable for movement imitation and also for classifying demonstrated movement according to the parameters of the learning system. We evaluate the system with a humanoid robot simulation and an actual humanoid robot. 
Experiments are presented for the imitation of three types of movements: reaching movements with one arm, drawing movements of 2-D patterns, and tennis swings. Our results demonstrate (a) that multi-joint human movements can be encoded successfully by the CPs, (b) that a learned movement policy can readily be reused to produce robust trajectories towards different targets, (c) that a policy fitted for one particular target provides a good predictor of human reaching movements towards neighboring targets, and (d) that the parameter space which encodes a policy is suitable for measuring to which extent two trajectories are qualitatively similar.", "title": "" } ]
scidocsrr
16f5b78b25ff7771c18a70152cb0fbb0
ESE: Efficient Speech Recognition Engine with Compressed LSTM on FPGA
[ { "docid": "76454b3376ec556025201a2f694e1f1c", "text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.", "title": "" } ]
[ { "docid": "ac442170f4fcde6e5a7b0e9921e46f9b", "text": "To the Editor, Anencephaly can be reliably diagnosed using ultrasound late in the first trimester of pregnancy [1]. The prevalence of anencephaly in twins is higher than that in singleton pregnancies, and the prevalence of discordance in anencephaly in monochorionic twins is higher than that in dichorionic twins [2]. There have been only two reports on conventional twodimensional (2D) sonographic assessment of fetal behavior in twins discordant because of anencephaly after 20 weeks of gestation [3, 4]. However, there has been no report on four-dimensional (4D) sonographic assessment of intertwin contact in twins discordant because of anencephaly in utero. To the best of our knowledge, this is the first report on 4D sonographic assessment of inter-twin contact in a case of monochorionic diamniotic (MD) twins with acrania of one twin fetus late in the first trimester. A 28-year-old Japanese woman, gravida 3, para 1, visited our hospital because of secondary amenorrhea, and MD twin pregnancy at 7 weeks and 4 days was diagnosed. At 11 weeks and 4 days, twin pregnancy with acrania of one twin fetus was diagnosed (Fig. 1). The parents were informed about the lethality of the affected twin fetus, but they elected to continue the pregnancy. At 25 weeks and 1 day, she was admitted to our hospital because of threatened premature labor (short cervix and irritable uterine contractions). At 38 weeks and 4 days, the first of two female infants (anencephalic twin), weighing 1,948 g, and the other infant (second twin), weighing 2,372 g (Apgar score 7 at 1 min and 9 at 5 min; umbilical artery blood pH 7.089), were delivered by elective cesarean section because of two previous cesarean sections. The anencephalic twin died soon after delivery. Permission to conduct an autopsy was not granted by the parents. The second twin is doing well. Detailed descriptions of the data-collecting methods and measurement procedures used in this patient have been presented in a previous publication [5]. In brief, examinations were performed for 30 min with transabdominal 4D sonography at 11 weeks and 4 days and 13 weeks and 1 day of pregnancy, respectively. She was asked whether she would agree to a 30-min observation of fetal movements and inter-twin contact after undergoing routine sonographic examinations. This study was approved by the local ethics committee of Kagawa University School of Medicine, and standardized written informed consent was obtained from the patient. All 4D examinations were performed using Voluson 730 Expert (GE Medical Systems, Milwaukee, WI) with a transabdominal 2–5-MHz transducer. Ten types of inter-twin contact (head to head, head to arm, head to trunk, head to leg, arm to arm, arm to trunk, arm to leg, trunk to trunk, trunk to leg, and leg to leg) were analyzed during playback of the video recordings. The total number of all inter-twin contacts was determined by a single experienced observer (M.S.) and compared to the quartile range obtained from normal MD twins [5]. The frequencies of ten types of inter-twin contact were also compared to the quartile ranges obtained from normal MD twins [6]. The total number of inter-twin contacts in this patient was low compared to those of normal MD twin fetuses at 10–11 and 12–13 weeks’ gestation (Fig. 2). The frequencies of 10 types of inter-twin contact at 10–11 weeks were almost within quartile ranges (Fig. 3), T. Hata (&) K. Kanenishi U. Hanaoka M. Sasaki T. 
Yanagihara Department of Perinatology and Gynecology, Kagawa University School of Medicine, 1750-1 Ikenobe, Miki, Kagawa 761-0793, Japan e-mail: toshi28@med.kagawa-u.ac.jp", "title": "" }, { "docid": "d74df8673db783ff80d01f2ccc0fe5bf", "text": "The search for strategies to mitigate undesirable economic, ecological, and social effects of harmful resource consumption has become an important, socially relevant topic. An obvious starting point for businesses that wish to make value creation more sustainable is to increase the utilization rates of existing resources. Modern social Internet technology is an effective means by which to achieve IT-enabled sharing services, which make idle resource capacity owned by one entity accessible to others who need them but do not want to own them. Successful sharing services require synchronized participation of providers and users of resources. The antecedents of the participation behavior of providers and users has not been systematically addressed by the extant literature. This article therefore proposes a model that explains and predicts the participation behavior in sharing services. Our search for a theoretical foundation revealed the Theory of Planned Behavior as most appropriate lens, because this theory enables us to integrate provider behavior and user behavior as constituents of participation behavior. The model is novel for that it is the first attempt to study the interdependencies between the behavior types in sharing service participation and for that it includes both general and specific determinants of the participation behavior.", "title": "" }, { "docid": "48317f6959b4a681e0ff001c7ce3e7ee", "text": "We introduce the challenge of using machine learning effectively in space applications and motivate the domain for future researchers. Machine learning can be used to enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science return of space missions. In addition to the challenges provided by the nature of space itself, the requirements of a space mission severely limit the use of many current machine learning approaches, and we encourage researchers to explore new ways to address these challenges.", "title": "" }, { "docid": "7980db78777ffe47814ec8d140c38221", "text": "This study was conducted to determine the effect of abdominal exercises versus abdominal supporting belt on abdominal efficiency and inter-recti separation following vaginal delivery.30 primiparous post-natal women participated in this study. Their age ranged from (25 35) years and their BMI < 30 Kg/m. Participants were assigned randomly into 2groups, participants of group (A) used abdominal belt from the 2 day following delivery, till the end of puerperium (6 weeks), while participants of group (B) engaged into abdominal exercises program from the 2 day following delivery for 6 weeks. The results of the present study revealed that although there was no statistical difference in waist circumference between both groups, participation in abdominal exercise program produced a pronounced reduction in waist/hip ratio, and inter-recti separation and also caused significant increase in abdominal muscles strength (peak torque, maximum repetition total work and average power) higher than the use of abdominal belt. 
Keywords—Abdominal exercise, Abdominal supporting belt, Postnatal abdominal weakness, Rectus Diastasis.", "title": "" }, { "docid": "1326be667e3ec3aa6bf0732ef97c230a", "text": "Recognizing human activities in a sequence is a challenging area of research in ubiquitous computing. Most approaches use a fixed size sliding window over consecutive samples to extract features—either handcrafted or learned features—and predict a single label for all samples in the window. Two key problems emanate from this approach: i) the samples in one window may not always share the same label. Consequently, using one label for all samples within a window inevitably leads to loss of information; ii) the testing phase is constrained by the window size selected during training while the best window size is difficult to tune in practice. We propose an efficient algorithm that can predict the label of each sample, which we call dense labeling, in a sequence of human activities of arbitrary length using a fully convolutional network. In particular, our approach overcomes the problems posed by the sliding window step. Additionally, our algorithm learns both the features and classifier automatically. We release a new daily activity dataset based on a wearable sensor with hospitalized patients. We conduct extensive experiments and demonstrate that our proposed approach is able to outperform the state of the art in terms of classification and label misalignment measures on three challenging datasets: Opportunity, Hand Gesture, and our new dataset.", "title": "" }, { "docid": "2ce4e4d5026114739adfeee7626e2aae", "text": "A neural network model for visual pattern recognition, called the \"neocognitron,\" was previously proposed by the author. In this paper, we discuss the mechanism of the model in detail. In order to demonstrate the ability of the neocognitron, we also discuss a pattern-recognition system which works with the mechanism of the neocognitron. The system has been implemented on a minicomputer and has been trained to recognize handwritten numerals. The neocognitron is a hierarchical network consisting of many layers of cells, and has variable connections between the cells in adjoining layers. It can acquire the ability to recognize patterns by learning, and can be trained to recognize any set of patterns. After finishing the process of learning, pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the input patterns. In the hierarchical network of the neocognitron, local features of the input pattern are extracted by the cells of a lower stage, and they are gradually integrated into more global features. Finally, each cell of the highest stage integrates all the information of the input pattern, and responds only to one specific pattern. Thus, the response of the cells of the highest stage shows the final result of the pattern-recognition of the network. During this process of extracting and integrating features, errors in the relative position of local features are gradually tolerated.
The operation of tolerating positional error a little at a time at each stage, rather than all in one step, plays an important role in endowing the network with an ability to recognize even distorted patterns.", "title": "" }, { "docid": "7c0b7d55abdd6cce85730dbf1cd02109", "text": "Suppose f1, f2, ..., fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h1, h2, ..., hk respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(f1, f2, ..., fk; N) denote the number of positive integers n between 1 and N inclusive such that f1(n), f2(n), ..., fk(n) are all primes. (We ignore the finitely many values of n for which some fi(n) is negative.) Then heuristically we would expect to have for N large", "title": "" }, { "docid": "30a75bbb74bdc8a5e82ad19de7e1faff", "text": "Scattering centers of two different vehicles have been evaluated using SAR, and the combination of SAR and DBF. The results show that main scattering centers are located at the wheels, the regions around the license plate, door outer panel, windshield pillar, center pillar, rear pillar, and light units. Furthermore, it has been shown that multipath propagation enables an indirect detection of scattering centers. The analysis shows no significant differences between the scattering center locations at 24 GHz and 77 GHz. However, the increased bandwidth at 77 GHz allows resolving merged scattering centers and improves the contour determination. The comparison between the measurement results of the SAR processing and the mechanical scanning radar confirmed similar main scattering centers. Furthermore, the measurement results show that the contour and orientation of a vehicle can be determined at a distance of 10 m with a bandwidth of 2 GHz and an angular resolution of approximately 2°.", "title": "" }, { "docid": "959618d50b59ce316cebb24a18375cde", "text": "Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person's contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs.", "title": "" }, { "docid": "aa1071a3b5b720922fc254e1e4b9d70d", "text": "This paper presents a zero-voltage-switching (ZVS) full-bridge dc-dc converter combining resonant and pulse-width-modulation (PWM) power conversions for electric vehicle battery chargers.
In the proposed converter, a half-bridge LLC resonant circuit shares the lagging leg with a phase-shift full-bridge (PSFB) dc-dc circuit to guarantee ZVS of the lagging-leg switches from zero to full load. A secondary-side hybrid-switching circuit, which is formed by the leakage inductance, output inductor of the PSFB dc-dc circuit, a small additional resonant capacitor, and two additional diodes, is integrated at the secondary side of the PSFB dc-dc circuit. With the clamp path of a hybrid-switching circuit, the voltage overshoots that arise during the turn off of the rectifier diodes are eliminated and the voltage of bridge rectifier is clamped to the minimal achievable value, which is equal to secondary-reflected input voltage of the transformer. The sum of the output voltage of LLC resonant circuit and the resonant capacitor voltage of the hybrid-switching circuit is applied between the bridge rectifier and the output inductor of the PSFB dc-dc circuit during the freewheeling phases. As a result, the primary-side circulating current of the PSFB dc-dc circuit is instantly reset to zero, achieving minimized circulating losses. The effectiveness of the proposed converter was experimentally verified using a 4-kW prototype circuit. The experimental results show 98.6% peak efficiency and high efficiency over wide load and output voltage ranges.", "title": "" }, { "docid": "00a48b2c053c5d634a3480c1543cb3d2", "text": "Interruptions and distractions due to smartphone use in healthcare settings pose potential risks to patient safety. Therefore, it is important to assess smartphone use at work, to encourage nursing students to review their relevant behaviors, and to recognize these potential risks. This study's aim was to develop a scale to measure smartphone addiction and test its validity and reliability. We investigated nursing students' experiences of distractions caused by smartphones in the clinical setting and their opinions about smartphone use policies. Smartphone addiction and the need for a scale to measure it were identified through a literature review and in-depth interviews with nursing students. This scale showed reliability and validity with exploratory and confirmatory factor analysis. In testing the discriminant and convergent validity of the selected (18) items with four factors, the smartphone addiction model explained approximately 91% (goodness-of-fit index = 0.909) of the variance in the data. Pearson correlation coefficients among addiction level, distractions in the clinical setting, and attitude toward policies on smartphone use were calculated. Addiction level and attitude toward policies of smartphone use were negatively correlated. This study suggests that healthcare organizations in Korea should create practical guidelines and policies for the appropriate use of smartphones in clinical practice.", "title": "" }, { "docid": "973426438175226bb46c39cc0a390d97", "text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. 
We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.", "title": "" }, { "docid": "b6473fa890eba0fd00ac7e1999ae6fef", "text": "Memristors have extended their influence beyond memory to logic and in-memory computing. Memristive logic design, the methodology of designing logic circuits using memristors, is an emerging concept whose growth is fueled by the quest for energy efficient computing systems. As a result, many memristive logic families have evolved with different attributes, and a mature comparison among them is needed to judge their merit. This paper presents a framework for comparing logic families by classifying them on the basis of fundamental properties such as statefulness, proximity (from the memory array), and flexibility of computation. We propose metrics to compare memristive logic families using analytic expressions for performance (latency), energy efficiency, and area. Then, we provide guidelines for a holistic comparison of logic families and set the stage for the evolution of new logic families.", "title": "" }, { "docid": "4292a60a5f76fd3e794ce67d2ed6bde3", "text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.", "title": "" }, { "docid": "8172b901dca0ee5cab2a2439ec5f0376", "text": "Manually designed workflows can be error-prone and inefficient. Workflow provenance contains fine-grained data processing information that can be used to detect workflow design problems. In this paper, we propose a provenance-driven workflow analysis framework that exploits both prospective and retrospective provenance. We show how provenance information can help the user gain a deeper understanding of a workflow and provide the user with insights into how to improve workflow design.", "title": "" }, { "docid": "79ca2676dab5da0c9f39a0996fcdcfd8", "text": "Estimation of human shape from images has numerous applications ranging from graphics to surveillance. A single image provides insufficient constraints (e.g. clothing), making human shape estimation more challenging. We propose a method to simultaneously estimate a person’s clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn our deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived by the non-rigid surface deformation, and encoded in various low-dimension parameters. Our deformable model can be used to produce clothed 3D meshes for different people in different poses, which neither appears in the training dataset. Afterward, given an input image, our deformable model is initialized with a few user-specified 2D joints and contours of the person. 
We optimize the parameters of the deformable model by pose fitting and body fitting in an iterative way. Then the clothed and naked 3D shapes of the person can be obtained simultaneously. We illustrate our method for texture mapping and animation. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "df3d91489c8c39ffb36f4c09a132c7d6", "text": "In this paper, we introduce a wheel-based cable climbing robot system developed for maintenance of the suspension bridges. The robot consists of three parts: a wheel based driving mechanism, adhesion mechanism, and safe landing mechanism. The driving mechanism is a combination of pantograph mechanism, and wheels driven by motors. In addition, we propose a special design of safe landing mechanism which can assure the safety of the robot on the cables when the power is lost. Finally, the proposed robotic system is manufactured and validated in the indoor experimental environments.", "title": "" }, { "docid": "5dbd99fa88cacc944874f2729cd3e4a1", "text": "This paper presents a fast algorithm for deriving the defocus map from a single image. Existing methods of defocus map estimation often include a pixel-level propagation step to spread the measured sparse defocus cues over the whole image. Since the pixel-level propagation step is time-consuming, we develop an effective method to obtain the whole-image defocus blur using oversegmentation and transductive inference. Oversegmentation produces the superpixels and hence greatly reduces the computation costs for subsequent procedures. Transductive inference provides a way to calculate the similarity between superpixels, and thus helps to infer the defocus blur of each superpixel from all other superpixels. The experimental results show that our method is efficient and able to estimate a plausible superpixel-level defocus map from a given single image.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. 
Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "c1f7a1733193356f430be594585e4dfe", "text": "A helicopter offers the capability of hover, slow forward displacement, vertical take-off and landing while a conventional airplane has the performance of fast forward movement, long reach and superior endurance. The aim of this paper is to present the modelling and control of a tilt tri-rotor UAV's configuration that combines the advantages of both rotary wing and fixed wing vehicle.", "title": "" } ]
scidocsrr
f98e9d73909262f3a7ee12b75ffebe56
Spontaneous network formation among cooperative RNA replicators
[ { "docid": "118738ca4b870e164c7be53e882a9ab4", "text": "I.1. Cause and Effect 465; I.2. Prerequisites of Selforganization 467; I.2.1. Evolution Must Start from Random Events 467; I.2.2. Instruction Requires Information 467; I.2.3. Information Originates or Gains Value by Selection 469; I.2.4. Selection Occurs with Special Substances under Special Conditions 470", "title": "" } ]
[ { "docid": "6dbb6b889a9789d14a7c37d932394b1c", "text": "I consider the issue of learning generative probabilistic models (e.g., Bayesian Networks) for the problems of classification and regression. As the generative models now serve as target-predicting functions, the learning problem can be treated differently from the traditional density estimation. Unlike the likelihood maximizing generative learning that fits a model to overall data, the discriminative learning is an alternative estimation method that optimizes the objectives that are much closely related with the prediction task (e.g., the conditional likelihood of target variables given input attributes). The contribution of this work is three-fold. First, for the family of general generative models, I provide a unifying parametric gradient-based optimization method for the discriminative learning. In the second part, not restricted to the classification problem with discrete targets, the method is applied to the continuous multivariate state domain, resulting in dynamical systems learned discriminatively. This is very appealing approach toward the structured state prediction problems such as motion tracking, in that the discriminative models in discrete domains (e.g., Conditional Random Fields or Maximum Entropy Markov Models) can be problematic to be extended to handle continuous targets properly. For the CMU motion capture data, I evaluate the generalization performance of the proposed methods on the 3D human pose tracking problem from the monocular videos. Despite the improved prediction performance of the discriminative learning, the parametric gradient-based optimization may have certain drawbacks such as the computational overhead and the sensitivity to the choice of the initial model. In the third part, I address these issues by introducing a novel recursive method for discriminative learning. The proposed method estimates a mixture of generative models, where the component to be added at each stage is selected in a greedy fashion, by the criterion maximizing the conditional likelihood of the new mixture. The approach is highly efficient as it reduces to the generative learning of the base generative models on weighted data. Moreover it is less sensitive to the initial model choice by enhancing the mixture model recursively. The improved classification performance of the proposed method is demonstrated in an extensive set of evaluations on time-series sequence data, including human motion classification problems.", "title": "" }, { "docid": "cfde63e0bb08f3b6d614bd5fe6258e65", "text": "The System Management Mode (SMM) is a highly privileged processor operating mode in x86 platforms. The goal of the SMM is to perform system management functions, such as hardware control and power management. Because of this, SMM has powerful resources. Moreover, its executive software executes unnoticed by any other component in the system, including operating systems and hypervisors. For that reason, SMM has been exploited in the past to facilitate attacks, misuse, or alternatively, building security tools capitalising on its resources. In this paper, we discuss how the use of the SMM has been contributing to the arms race between system's attackers and defenders. We analyse the main work published on attacks, misuse and implementing security tools in the SMM and how the SMM has been modified to respond to those issues. 
Finally, we discuss how Intel Software Guard Extensions (SGX) technology, a sort of \"hypervisor in processor\", presents a possible answer to the issue of using SMM for security purposes.", "title": "" }, { "docid": "ba58ba95879516c00d91cf75754eb131", "text": "In order to assess the current knowledge on the therapeutic potential of cannabinoids, a meta-analysis was performed through Medline and PubMed up to July 1, 2005. The key words used were cannabis, marijuana, marihuana, hashish, hashich, haschich, cannabinoids, tetrahydrocannabinol, THC, dronabinol, nabilone, levonantradol, randomised, randomized, double-blind, simple blind, placebo-controlled, and human. The research also included the reports and reviews published in English, French and Spanish. For the final selection, only properly controlled clinical trials were retained, thus open-label studies were excluded. Seventy-two controlled studies evaluating the therapeutic effects of cannabinoids were identified. For each clinical trial, the country where the project was held, the number of patients assessed, the type of study and comparisons done, the products and the dosages used, their efficacy and their adverse effects are described. Cannabinoids present an interesting therapeutic potential as antiemetics, appetite stimulants in debilitating diseases (cancer and AIDS), analgesics, and in the treatment of multiple sclerosis, spinal cord injuries, Tourette's syndrome, epilepsy and glaucoma.", "title": "" }, { "docid": "10dc52289ed1ea2f9ae6a6afd7299492", "text": "This work proposes a potentiostat circuit for multiple implantable sensor applications. Implantable sensors play a vital role in continuous in situ monitoring of biological phenomena in a real-time health care monitoring system. In the proposed work a three-electrode based electrochemical sensing system has been employed. In this system a fixed potential difference between the working and the reference electrodes is maintained using a potentiostat to generate a current signal in the counter electrode which is proportional to the concentration of the analyte. This potential difference between the working and the reference electrodes can be changed to detect different analytes. The designed low power potentiostat consumes only 66 µW with 2.5 volt power supply which is highly suitable for low-power implantable sensor applications. All the circuits are designed and fabricated in a 0.35-micron standard CMOS process.", "title": "" }, { "docid": "98533f4c358f7999ab37bda31575e68e", "text": "Predicting query execution time is useful in many database management issues including admission control, query scheduling, progress monitoring, and system sizing. Recently the research community has been exploring the use of statistical machine learning approaches to build predictive models for this task. An implicit assumption behind this work is that the cost models used by query optimizers are insufficient for query execution time prediction. In this paper we challenge this assumption and show while the simple approach of scaling the optimizer's estimated cost indeed fails, a properly calibrated optimizer cost model is surprisingly effective. However, even a well-tuned optimizer cost model will fail in the presence of errors in cardinality estimates. Accordingly we investigate the novel idea of spending extra resources to refine estimates for the query plan after it has been chosen by the optimizer but before execution. 
In our experiments we find that a well calibrated query optimizer model along with cardinality estimation refinement provides a low overhead way to provide estimates that are always competitive and often much better than the best reported numbers from the machine learning approaches.", "title": "" }, { "docid": "618496f6e0b1da51e1e2c81d72c4a6f1", "text": "Paid employment within clinical setting, such as externships for undergraduate student, are used locally and globally to better prepare and retain new graduates for actual practice and facilitate their transition into becoming registered nurses. However, the influence of paid employment on the post-registration experience of such nurses remains unclear. Through the use of narrative inquiry, this study explores how the experience of pre-registration paid employment shapes the post-registration experience of newly graduated registered nurses. Repeated individual interviews were conducted with 18 new graduates, and focus group interviews were conducted with 11 preceptors and 10 stakeholders recruited from 8 public hospitals in Hong Kong. The data were subjected to narrative and paradigmatic analyses. Taken-for-granted assumptions about the knowledge and performance of graduates who worked in the same unit for their undergraduate paid work experience were uncovered. These assumptions affected the quantity and quality of support and time that other senior nurses provided to these graduates for their further development into competent nurses and patient advocates, which could have implications for patient safety. It is our hope that this narrative inquiry will heighten awareness of taken-for-granted assumptions, so as to help graduates transition to their new role and provide quality patient care.", "title": "" }, { "docid": "864ab702d0b45235efe66cd9e3bc5e66", "text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.", "title": "" }, { "docid": "7476bbec4720e04223d56a71e6bab03e", "text": "We consider the performance analysis and design optimization of low-density parity check (LDPC) coded multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems for high data rate wireless transmission. The tools of density evolution with mixture Gaussian approximations are used to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios (SNRs) for ergodic MIMO OFDM channels. In particular, the optimization is done for various MIMO OFDM system configurations, which include a different number of antennas, different channel models, and different demodulation schemes; the optimized performance is compared with the corresponding channel capacity. 
It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration. It is also shown that compared with the optimal MAP demodulator-based receivers, the receivers employing a low-complexity linear minimum mean-square-error soft-interference-cancellation (LMMSE-SIC) demodulator have a small performance loss (< 1dB) in spatially uncorrelated MIMO channels but suffer extra performance loss in MIMO channels with spatial correlation. Finally, from the LDPC profiles that already are optimized for ergodic channels, we heuristically construct small block-size irregular LDPC codes for outage MIMO OFDM channels; as shown from simulation results, the irregular LDPC codes constructed here are helpful in expediting the convergence of the iterative receivers.", "title": "" }, { "docid": "103f4a18b4ae42756fef6ae583c4d742", "text": "The Essex intelligent dormitory, iDorm, uses embedded agents to create an ambient-intelligence environment. In a five-and-a-half-day experiment, a user occupied the iDorm, testing its ability to learn user behavior and adapt to user needs. The embedded agent discreetly controls the iDorm according to user preferences. Our work focuses on developing learning and adaptation techniques for embedded agents. We seek to provide online, lifelong, personalized learning of anticipatory adaptive control to realize the ambient-intelligence vision in ubiquitous-computing environments. We developed the Essex intelligent dormitory, or iDorm, as a test bed for this work and an exemplar of this approach.", "title": "" }, { "docid": "4bf68fb60aca11f999cdb1a9cd61e73c", "text": "This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it.", "title": "" }, { "docid": "0e14888a2399bba26ba794e241c5cc5c", "text": "This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M . This problem is closely related to the two existing problems: nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. As it takes advantages of both nonnegativity and low rank, its results can be superior than those of the two problems alone. 
Our algorithm is applied to minimizing a non-convex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorization through accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm have overall better qualities than those of two recent algorithms for matrix completion.", "title": "" }, { "docid": "0db200113ef14c8e88a3388c595148a6", "text": "Entity disambiguation is the task of mapping ambiguous terms in natural-language text to its entities in a knowledge base. It finds its application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question & Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows, that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data set specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.", "title": "" }, { "docid": "101309b306fa0b46100fc8c88ef05383", "text": "The study area is located ~50 km in the north of Tehran capital city, Iran, and is a part of central Alborz Mountain. The intrusive bodies aged post Eocene have intruded in the Eocene volcanic units causing hydrothermal alterations in these units. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images were used to map hydrothermal alteration zones. The propylitic, phyllic and argillic alteration and iron oxide minerals identified using Spectral Angle Mapper (SAM) method. Structural lineaments were extracted from ASTER images by applying automatic lineament extraction processes and visual interpretations. An exploration model was considered based on previous studies, and appropriate evidence maps were generated, weighted and reclassified. Ore Forming Potential (OFP) map was generated by applying Fuzzy SUM operator on alteration and Pb, Cu, Ag, and Au geochemical anomaly maps. Finally, Host rock, geological structures and OFP were combined using Fuzzy Gamma operator (γ ) to produce mineral prospectivity map. Eventually, the conceptual model discussed here, fairly demonstrated the known hydrothermal gold deposits in the study area and could be a source for future detailed explorations.", "title": "" }, { "docid": "981e88bd1f4187972f8a3d04960dd2dd", "text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. 
A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.", "title": "" }, { "docid": "84f5ab1dfcf6e03241fd72d3e76179f5", "text": "The goal of this work is to develop a meeting transcription system that can recognize speech even when utterances of different speakers are overlapped. While speech overlaps have been regarded as a major obstacle in accurately transcribing meetings, a traditional beamformer with a single output has been exclusively used because previously proposed speech separation techniques have critical constraints for application to real meetings. This paper proposes a new signal processing module, called an unmixing transducer, and describes its implementation using a windowed BLSTM. The unmixing transducer has a fixed number, say J, of output channels, where J may be different from the number of meeting attendees, and transforms an input multi-channel acoustic signal into J time-synchronous audio streams. Each utterance in the meeting is separated and emitted from one of the output channels. Then, each output signal can be simply fed to a speech recognition back-end for segmentation and transcription. Our meeting transcription system using the unmixing transducer outperforms a system based on a stateof-the-art neural mask-based beamformer by 10.8%. Significant improvements are observed in overlapped segments. 
To the best of our knowledge, this is the first report that applies overlapped speech recognition to unconstrained real meeting audio.", "title": "" }, { "docid": "d6a40f99a86b55584c52326240fc4170", "text": "In order to avoid wheel slippage or mechanical damage during the mobile robot navigation, it is necessary to smoothly change driving velocity or direction of the mobile robot. This means that dynamic constraints of the mobile robot should be considered in the design of path tracking algorithm. In the study, a path tracking problem is formulated as following a virtual target vehicle which is assumed to move exactly along the path with specified velocity. The driving velocity control law is designed basing on bang-bang control considering the acceleration bounds of driving wheels. The steering control law is designed by combining the bang-bang control with an intermediate path called the landing curve which guides the robot to smoothly land on the virtual target’s tangential line. The curvature and convergence analyses provide sufficient stability conditions for the proposed path tracking controller. A series of path tracking simulations and experiments conducted for a two-wheel driven mobile robot show the validity of the proposed algorithm.", "title": "" }, { "docid": "ce7d164774826897e9d7386ec9159bba", "text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.", "title": "" }, { "docid": "efe279fbc7307bc6a191ebb397b01823", "text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. 
Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.", "title": "" }, { "docid": "9892b1c48afb42443e7957fe85f5cb27", "text": "In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polynomial functions and the optimal order of each polynomial function is estimated so that our reconstruction error can be minimized. To robustly estimate the optimal order, we propose a multi-stage error estimation process that iteratively estimates our reconstruction error. In addition, we present an energy-preserving outlier removal technique to remove spike noise without causing noticeable energy loss in our reconstruction result. Also, we adaptively allocate additional ray samples to high error regions guided by our error estimation. We demonstrate that our approach outperforms state-of-the-art methods by controlling the tradeoff between reconstruction bias and variance through locally defining our polynomial order, even without need for filtering bandwidth optimization, the common approach of other recent methods.", "title": "" } ]
scidocsrr
3824a82a5ce27e373bd23c3c59a47cd2
Activity-Conditioned Continuous Human Pose Estimation for Performance Analysis of Athletes Using the Example of Swimming
[ { "docid": "7fa9bacbb6b08065ecfe0530f082a391", "text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.", "title": "" }, { "docid": "963b6b2b337541fd741d31b2c8addc8d", "text": "I. Unary terms • Body part detection candidates • Capture distribution of scores over all part classes II. Pairwise terms • Capture part relationships within/across people – proximity: same body part class (c = c) – kinematic relations: different part classes (c!= c) III. Integer Linear Program (ILP) • Substitute zdd cc = xdc xd c ydd ′ to linearize objective • NP-Hard problem solved via branch-and-cut (1% gap) • Linear constraints on 0/1 labelings: plausible poses – uniqueness", "title": "" }, { "docid": "91b0f32a1cc2aeb6c174364e6dd3a30b", "text": "Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.", "title": "" } ]
[ { "docid": "8689b038c62d96adf1536594fcc95c07", "text": "We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.", "title": "" }, { "docid": "9eca9a069f8d1e7bf7c0f0b74e3129f0", "text": "With increasing use of GPS devices more and more location-based information is accessible. Thus not only more movements of people are tracked but also POI (point of interest) information becomes available in increasing geo-spatial density. To enable analysis of movement behavior, we present an approach to enrich trajectory data with semantic POI information and show how additional insights can be gained. Using a density-based clustering technique we extract 1.215 frequent destinations of ~150.000 user movements from a large e-mobility database. We query available context information from Foursquare, a popular location-based social network, to enrich the destinations with semantic background. As GPS measurements can be noisy, often more then one possible destination is found and movement patterns vary over time. Therefore we present highly interactive visualizations that enable an analyst to cope with the inherent geospatial and behavioral uncertainties.", "title": "" }, { "docid": "8b5e07e3203cf38fc2db6c08874a70be", "text": "UAV operations are examined from a performance and logistic flexibility point of view in order to set up requirements to be input for the multiobjective optimization of a two component simple rotation flap slotted airfoil with high thickness to chord ratio. The airfoil selected among a wide range of geometries optimizing the two design points has been investigated using CFD for stability at low Reynolds numbers and sensitivity to parameters like free stream turbulence and in-flight icing. Wind tunnel tests have been performed for the two dimensional wing section and for a complete UAV aircraft configuration in order to confirm theoretical previsions. Finally, flight tests of the prototype aircraft have been executed with results in good agreement with the previous design and test work. NOMENCLATURE A/C Aircraft ALSWT Alenia Low Speed Wind Tunnel A/P Autopilot AR Aspect Ratio CD Drag Coefficient CFD Computational Fluid Dynamics CMIC Continuous Maximum Icing Conditions C.I.R.A. Centro Italiano Ricerche Aerospaziali (Italian Aerospace Research Center) CL Lift Coefficient EAS Equivalent Air Speed EASA European Aviation Safety Agency G.A. Galileo Avionica GA-ASI General Atomics – Aeronautical Systems Inc. Cistriani, L. (2007) Falco UAV Low Reynolds Airfoil Design and Testing at Galileo Avionica. In UAV Design Processes / Design Criteria for Structures (pp. 3.3-1 – 3.3-24). 
Meeting Proceedings RTO-MP-AVT-145, Paper 3.3. Neuilly-sur-Seine, France: RTO. Available from: http://www.rto.nato.int/abstracts.asp.", "title": "" }, { "docid": "dbda28573269e3f87c520fa34395e533", "text": "The requirements for dielectric measurements on polar liquids lie largely in two areas. First there is scientific interest in revealing the structure of and interactions between the molecules - this can be studied through dielectric spectroscopy. Secondly, polar liquids are widely used as dielectric reference and tissue equivalent materials for biomedical studies and for mobile telecommunications, health and safety related measurements. This review discusses these roles for polar liquids and surveys the techniques available for the measurement of their complex permittivity at RF and Microwave frequencies. One aim of the review is to guide researchers and metrologists in the choice of measurement methods and in their optimization. Particular emphasis is placed on the importance of traceability in these measurements to international standards", "title": "" }, { "docid": "946330bdcc96711090f15dbaf772edf6", "text": "This paper deals with the estimation of the channel impulse response (CIR) in orthogonal frequency division multiplexed (OFDM) systems. In particular, we focus on two pilot-aided schemes: the maximum likelihood estimator (MLE) and the Bayesian minimum mean square error estimator (MMSEE). The advantage of the former is that it is simpler to implement as it needs no information on the channel statistics. On the other hand, the MMSEE is expected to have better performance as it exploits prior information about the channel. Theoretical analysis and computer simulations are used in the comparisons.
At SNR values of practical interest, the two schemes are found to exhibit nearly equal performance, provided that the number of pilot tones is sufficiently greater than the CIRs length. Otherwise, the MMSEE is superior. In any case, the MMSEE is more complex to implement.", "title": "" }, { "docid": "ec6c1bab77a149c55c63ec7414b9128a", "text": "Recently, many different types of artificial neural networks (ANNs) have been applied to forecast stock price and good performance is obtained. However, most of these models use only a small number of features as input and there may not be enough information to make prediction due to the complexity of stock market. If having a larger number of features, the run time of training would be increased and the generalization performance would be deteriorated due to the curse of dimension. Therefore, an effective tool to extract highly discriminative low-dimensional features from the high-dimensional raw input would be a great help in improving the generalization performance of the regression model. Restricted Boltzmann Machine (RBM) is a new type of machine learning tool with strong power of representation, which has been utilized as the feature extractor in a large variety of classification problems. In this paper, we use the RBM to extract discriminative low-dimensional features from raw data with dimension up to 324, and then use the extracted features as the input of Support Vector Machine (SVM) for regression. Experimental results indicate that our approach for stock price prediction has great improvement in terms of low forecasting errors compared with SVM using raw data.", "title": "" }, { "docid": "25dcc8e71b878bfed01e95160d9b82ef", "text": "Wireless Sensor Networks (WSN) has been a focus for research for several years. WSN enables novel and attractive solutions for information gathering across the spectrum of endeavour including transportation, business, health-care, industrial automation, and environmental monitoring. Despite these advances, the exponentially increasing data extracted from WSN is not getting adequate use due to the lack of expertise, time and money with which the data might be better explored and stored for future use. The next generation of WSN will benefit when sensor data is added to blogs, virtual communities, and social network applications. This transformation of data derived from sensor networks into a valuable resource for information hungry applications will benefit from techniques being developed for the emerging Cloud Computing technologies. Traditional High Performance Computing approaches may be replaced or find a place in data manipulation prior to the data being moved into the Cloud. In this paper, a novel framework is proposed to integrate the Cloud Computing model with WSN. Deployed WSN will be connected to the proposed infrastructure. Users request will be served via three service layers (IaaS, PaaS, SaaS) either from the archive, archive is made by collecting data periodically from WSN to Data Centres (DC), or by generating live query to corresponding sensor network.", "title": "" }, { "docid": "e34ba302c8d4310cc64305a3329eada9", "text": "The aim of this study was to examine the validity of vertical jump (VJ) performance variables in elite-standard male and female Italian soccer players. One hundred eighteen national team soccer players (n = 56 men and n = 62 women) were tested for countermovement (CMJ) and squatting jump (SJ) heights. 
The stretch-shortening cycle efficiency (SSCE) was assessed as percentage of CMJ gain over SJ (ΔCMJ-SJ), difference (CMJ-SJ), and ratio (CMJ:SJ). Results showed significant sex difference in SJ and CMJ. Differences in SSCE were mainly in the absolute variables between sexes. Cutoff values for CMJ and SJ using sex as construct were 34.4 and 32.9 cm, respectively. No competitive level differences in VJ performance were detected in the male players. Female national team players showed VJ performance higher than the under 17 counterpart. The results of this study showed that VJ performance could not discriminate between competitive levels in male national team-selected soccer players. However, the use of CMJ and SJ normative data may help strength and conditioning coaches in prescribing lower limb explosive strength training in elite soccer players. In this, variations in VJ performance in the range of approximately 1 cm may be regarded as of interest in tracking noncasual variation in elite-standard soccer players.", "title": "" }, { "docid": "f05cb5a3aeea8c4151324ad28ad4dc93", "text": "With the discovery of induced pluripotent stem (iPS) cells, it is now possible to convert differentiated somatic cells into multipotent stem cells that have the capacity to generate all cell types of adult tissues. Thus, there is a wide variety of applications for this technology, including regenerative medicine, in vitro disease modeling, and drug screening/discovery. Although biological and biochemical techniques have been well established for cell reprogramming, bioengineering technologies offer novel tools for the reprogramming, expansion, isolation, and differentiation of iPS cells. In this article, we review these bioengineering approaches for the derivation and manipulation of iPS cells and focus on their relevance to regenerative medicine.", "title": "" }, { "docid": "8ea5ed93c3c162c99fe329d243906712", "text": "This paper describes the design, simulation and measurement of a dual-band slotted waveguide antenna array for adaptive 5G networks, operating in the millimeter wave frequency range. Its structure is composed by two groups of slots milled onto the opposite faces of a rectangular waveguide, enabling antenna operation over two different frequency bands, namely 28 and 38 GHz. Measured and numerical results, obtained using ANSYS HFSS, demonstrate two bandwidths of approximately 26.36% and 9.78% for 28 GHz and 38 GHz, respectively. The antenna gain varies from 12.6 dBi for the lower frequency band to 15.6 dBi for the higher one.", "title": "" }, { "docid": "d026ebfc24e3e48d0ddb373f71d63162", "text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beat-frequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation.
The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.", "title": "" }, { "docid": "7c87ec9ac7e5170e0ddaccadf992ea3f", "text": "Social computational systems emerge in the wild on popular social networking sites like Facebook and Twitter, but there remains confusion about the relationship between social interactions and the technical traces of interaction left behind through use. Twitter interactions and social experience are particularly challenging to make sense of because of the wide range of tools used to access Twitter (text message, website, iPhone, TweetDeck and others), and the emergent set of practices for annotating message context (hashtags, reply to's and direct messaging). Further, Twitter is used as a back channel of communication in a wide range of contexts, ranging from disaster relief to watching television. Our study examines Twitter as a transport protocol that is used differently in different socio-technical contexts, and presents an analysis of how researchers might begin to approach studies of Twitter interactions with a more reflexive stance toward the application programming interfaces (APIs) Twitter provides. We conduct a careful review of existing literature examining socio-technical phenomena on Twitter, revealing a collective inconsistency in the description of data gathering and analysis methods. In this paper, we present a candidate architecture and methodological approach for examining specific parts of the Twittersphere. Our contribution begins a discussion among social media researchers on the topic of how to systematically and consistently make sense of the social phenomena that emerge through Twitter. This work supports the comparative analysis of Twitter studies and the development of social media theories.", "title": "" }, { "docid": "591426a345e030cec904084c08609a12", "text": "Following Max Weber, many theories have hypothesized that Protestantism should have favored economic development. With its religious heterogeneity, the Holy Roman Empire presents an ideal testing ground for this hypothesis. Using population figures of 272 cities in the years 1300–1900, I find no effects of Protestantism on economic growth. The finding is precisely estimated, robust to the inclusion of various controls, and does not depend on data selection or small sample size. Denominational differences in fertility behavior and literacy are unlikely to be major confounding factors. Protestantism has no effect when interacted with other likely determinants of economic development. Instrumental variables estimates, considering the potential endogeneity of religious choice, are similar to the OLS results. (JEL: N13, N33, O11, Z12) The editor in charge of this paper was Fabrizio Zilibotti. Acknowledgments: I thank Daron Acemoglu, Regina Baar-Cantoni, Robert Barro, Jeremiah Dittmar, Camilo Garcia-Jimeno, Claudia Goldin, Tim Guinnane, Martin Hellwig, Elhanan Helpman, James Robinson, Holger Spamann, Eike Wolgast and Noam Yuchtman for helpful comments and suggestions, as well as seminar audiences at Bocconi, Brown, the EEA Annual Meeting (Barcelona), IIES Stockholm, Harvard, Mannheim, MPI Bonn, Munich, Regensburg, UPF, and Yale. Financial support by the Economic History Association, the Minda de Gunzburg Center for European Studies and the Studienstiftung des Deutschen Volkes is gratefully acknowledged.
Jessica Fronk, Eda Karesin, Niklas Neckel, and Annekathrin Schmitt provided excellent research assistance. E-mail: cantoni@lmu.de Journal of the European Economic Association Preprint prepared on 24 June 2014 using jeea.cls v1.0.", "title": "" }, { "docid": "483a349f65e1524916ea0190ecf4e18b", "text": "Physical library collections are valuable and long standing resources for knowledge and learning. However, managing books in a large bookshelf and finding books on it often leads to tedious manual work, especially for large book collections where books might be missing or misplaced. Recently, deep neural models, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved great success for scene text detection and recognition. Motivated by these recent successes, we aim to investigate their viability in facilitating book management, a task that introduces further challenges including large amounts of cluttered scene text, distortion, and varied lighting conditions. In this paper, we present a library inventory building and retrieval system based on scene text reading methods. We specifically design our scene text recognition model using rich supervision to accelerate training and achieve state-of-the-art performance on several benchmark datasets. Our proposed system has the potential to greatly reduce the amount of human labor required in managing book inventories as well as the space needed to store book information.", "title": "" }, { "docid": "cbe26f489e3a5cd196913e3996284bae", "text": "The max-product \"belief propagation\" algorithm is an iterative, local, message passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding and computer vision which involve graphs with many cycles, theoretical convergence results are only known for graphs which are tree-like or have a single cycle. In this paper, we consider a weighted complete bipartite graph and define a probability distribution on it whose MAP assignment corresponds to the maximum weight matching (MWM) in that graph. We analyze the fixed points of the max-product algorithm when run on this graph and prove the surprising result that even though the underlying graph has many short cycles, the maxproduct assignment converges to the correct MAP assignment. We also provide a bound on the number of iterations required by the algorithm", "title": "" }, { "docid": "6e6655838474fdd7d6b0f989c5727c07", "text": "We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.", "title": "" }, { "docid": "124c649cc8dc2d04e28043257ed8ddd4", "text": "TECSAR satellite is part of a spaceborne synthetic-aperture-radar (SAR) satellite technology demonstration program. 
The purpose of this program is to develop and evaluate the technologies required to achieve high-resolution images combined with large-area coverage. These requirements can be fulfilled by designing a satellite with multimode operation. The TECSAR satellite is developed by the MBT Space Division, Israel Aerospace Industries, acting as a prime contractor, which develops the satellite bus, and by ELTA Systems Ltd., which develops the SAR payload. This paper reviews the TECSAR radar system design, which enables to perform a variety of operational modes. It also describes the unique hardware components: deployable parabolic mesh antenna, multitube transmitter, and data-link transmission unit. The unique mosaic mode is presented. It is shown that this mode is the spot version of the scan mode.", "title": "" }, { "docid": "64d3ecaa2f9e850cb26aac0265260aff", "text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.", "title": "" }, { "docid": "52e36a3910d9782f60cd8fcb3dc54c60", "text": "INTRODUCTION\nCognitive behavioural therapy (CBT) with trauma focus is the most evidence supported psychotherapeutic treatment of PTSD, but few CBT treatments for traumatized refugees have been described in detail.\n\n\nPURPOSE\nTo describe and evaluate a manualized cognitive behavioral therapy for traumatized refugees incorporating exposure therapy, mindfulness and acceptance and commitment therapy.\n\n\nMATERIAL AND METHODS\n85 patients received six months' treatment at a Copenhagen Trauma Clinic for Refugees and completed self-ratings before and after treatment. The treatment administered to each patient was monitored in detail. The changes in mental state and the treatment components associated with change in state were analyzed statistically.\n\n\nRESULTS\nDespite the low level of functioning and high co-morbidity of patients, 42% received highly structured CBT, which was positively associated with all treatment outcomes. The more methods used and the more time each method was used, the better the outcome. The majority of patients were able to make homework assignments and this was associated with better treatment outcome. Correlation analysis showed no association between severity of symptoms at baseline and the observed change.\n\n\nCONCLUSION\nThe study suggests that CBT treatment incorporating mindfulness and acceptance and commitment therapy is promising for traumatized refugees and punctures the myth that this group of patients are unable to participate fully in structured CBT. However, treatment methods must be adapted to the special needs of refugees and trauma exposure should be further investigated.", "title": "" } ]
scidocsrr
ebccc47a395be74dabf23227578ec62a
Reduction Otoplasty: Correction of the Large or Asymmetric Ear
[ { "docid": "36acc76d232f2f58fcb6b65a1d4027aa", "text": "Surface measurements of the ear are needed to assess damage in patients with disfigurement or defects of the ears and face. Population norms are useful in calculating the amount of tissue needed to rebuild the ear to adequate size and natural position. Anthropometry proved useful in defining grades of severe, moderate, and mild microtia in 73 patients with various facial syndromes. The division into grades was based on the amount of tissue lost and the degree of asymmetry in the position of the ears. Within each grade the size and position of the ears varied greatly. In almost one-third, the nonoperated microtic ears were symmetrically located, promising the best aesthetic results with the least demanding surgical procedures. In slightly over one-third, the microtic ears were associated with marked horizontal and vertical asymmetries. In cases of horizontal and vertical dislocation exceeding 20 mm, surgical correction of the defective facial framework should precede the building up of a new ear. Data on growth and age of maturation of the ears in the normal population can be useful in choosing the optimal time for ear reconstruction.", "title": "" } ]
[ { "docid": "7d840ba451a7783aaa1abb040264e411", "text": "The latest developments in mobile computing technology have changed user preferences for computing. However, in spite of all the advancements in the recent years, Smart Mobile Devices (SMDs) are still low potential computing devices which are limited in memory capacity, CPU speed and battery power lifetime. Therefore, Mobile Cloud Computing (MCC) employs computational offloading for enabling computationally intensive mobile applications on SMDs. However, state-of-the-art computational offloading frameworks lack of considering the additional overhead of components migration at runtime. Therefore resources intensive and energy consuming distributed application execution platform is established. This paper proposes a novel distributed Energy Efficient Computational Offloading Framework (EECOF) for the processing of intensive mobile applications in MCC. The framework focuses on leveraging application processing services of cloud datacenters with minimal instances of computationally intensive component migration at runtime. As a result, the size of data transmission and energy consumption cost is reduced in computational offloading for MCC. We evaluate the proposed framework by benchmarking prototype application in the real MCC environment. Analysis of the results show that by employing EECOF the size of data transmission over the wireless network medium is reduced by 84 % and energy consumption cost is reduced by 69.9 % in offloading different components of the prototype application. Hence, EECOF provides an energy efficient application layer solution for computational offloading in MCC.", "title": "" }, { "docid": "da36a172f042ff9ef1a4fdf9ccc0f0a8", "text": "The Human Brain Project (HBP) is a candidate project in the European Union’s FET Flagship Program, funded by the ICT Program in the Seventh Framework Program. The project will develop a new integrated strategy for understanding the human brain and a novel research platform that will integrate all the data and knowledge we can acquire about the structure and function of the brain and use it to build unifying models that can be validated by simulations running on supercomputers. The project will drive the development of supercomputing for the life sciences, generate new neuroscientific data as a benchmark for modeling, develop radically new tools for informatics, modeling and simulation, and build virtual laboratories for collaborative basic and clinical studies, drug simulation and virtual prototyping of neuroprosthetic, neuromorphic, and robotic devices. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]", "title": "" }, { "docid": "bb9fd3e54d8d5ce32147b437ed5f52d4", "text": "OBJECTIVE\nTo assess the association between bullying (both directly and indirectly) and indicators of psychosocial health for boys and girls separately.\n\n\nSTUDY DESIGN\nA school-based questionnaire survey of bullying, depression, suicidal ideation, and delinquent behavior.\n\n\nSETTING\nPrimary schools in Amsterdam, The Netherlands.\n\n\nPARTICIPANTS\nA total of 4811 children aged 9 to 13.\n\n\nRESULTS\nDepression and suicidal ideation are common outcomes of being bullied in both boys and girls. These associations are stronger for indirect than direct bullying. After correction, direct bullying had a significant effect on depression and suicidal ideation in girls, but not in boys. 
Boy and girl offenders of bullying far more often reported delinquent behavior. Bullying others directly is a much greater risk factor for delinquent behavior than bullying others indirectly. This was true for both boys and girls. Boy and girl offenders of bullying also more often reported depressive symptoms and suicidal ideation. However, after correction for both sexes only a significant association still existed between bullying others directly and suicidal ideation.\n\n\nCONCLUSIONS\nThe association between bullying and psychosocial health differs notably between girls and boys as well as between direct and indirect forms of bullying. Interventions to stop bullying must pay attention to these differences to enhance effectiveness.", "title": "" }, { "docid": "d2eb6c8dc6a3dd475248582361e89284", "text": "In the last few years, uncertainty management has come to be recognized as a fundamental aspect of data integration. It is now accepted that it may not be possible to remove uncertainty generated during data integration processes and that uncertainty in itself may represent a source of relevant information. Several issues, such as the aggregation of uncertain mappings and the querying of uncertain mediated schemata, have been addressed by applying well-known uncertainty management theories. However, several problems lie unresolved. This article sketches an initial picture of this highly active research area; it details existing works in the light of a homogeneous framework, and identifies and discusses the leading issues awaiting solutions.", "title": "" }, { "docid": "d6e178e87601b2a7d442b97e42c34350", "text": "BACKGROUND\nNo systematic review and narrative synthesis on personal recovery in mental illness has been undertaken.\n\n\nAIMS\nTo synthesise published descriptions and models of personal recovery into an empirically based conceptual framework.\n\n\nMETHOD\nSystematic review and modified narrative synthesis.\n\n\nRESULTS\nOut of 5208 papers that were identified and 366 that were reviewed, a total of 97 papers were included in this review. The emergent conceptual framework consists of: (a) 13 characteristics of the recovery journey; (b) five recovery processes comprising: connectedness; hope and optimism about the future; identity; meaning in life; and empowerment (giving the acronym CHIME); and (c) recovery stage descriptions which mapped onto the transtheoretical model of change. Studies that focused on recovery for individuals of Black and minority ethnic (BME) origin showed a greater emphasis on spirituality and stigma and also identified two additional themes: culturally specific facilitating factors and collectivist notions of recovery.\n\n\nCONCLUSIONS\nThe conceptual framework is a theoretically defensible and robust synthesis of people's experiences of recovery in mental illness. This provides an empirical basis for future recovery-oriented research and practice.", "title": "" }, { "docid": "5804eb5389b02f2f6c5692fe8f427501", "text": "reflection-type phase shifter with constant insertion loss over a wide relative phase-shift range is presented. This important feature is attributed to the salient integration of an impedance-transforming quadrature coupler with equalized series-resonated varactors. The impedance-transforming quadrature coupler is used to increase the maximal relative phase shift for a given varactor with a limited capacitance range. 
When the phase is tuned, the typical large insertion-loss variation of the phase shifter due to the varactor parasitic effect is minimized by shunting the series-resonated varactor with a resistor Rp. A set of closed-form equations for predicting the relative phase shift, insertion loss, and insertion-loss variation with respect to the quadrature coupler and varactor parameters is derived. Three phase shifters were implemented with a silicon varactor of a restricted capacitance range of Cv,min = 1.4 pF and Cv,max = 8 pF, wherein the parasitic resistance is close to 2 Omega. The measured insertion-loss variation is 0.1 dB over the relative phase-shift tuning range of 237deg at 2 GHz and the return losses are better than 20 dB, excellently agreeing with the theoretical and simulated results.", "title": "" }, { "docid": "0cc25de8ea70fe1fd85824e8f3155bf7", "text": "When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects’ shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to tailor mapping rules, through limited user input, to a specific application domain. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.", "title": "" }, { "docid": "50c0f3cdccc1fe63f3fcb4cb3c983617", "text": "Junho Yang Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: yang125@illinois.edu Ashwin Dani Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: adani@illinois.edu Soon-Jo Chung Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: sjchung@illinois.edu Seth Hutchinson Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: seth@illinois.edu", "title": "" }, { "docid": "9bc56456f770a1b928d97b8877682a82", "text": "Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions based on suband supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to optimally solve. 
We provide theoretical guarantees of the approximation quality with respect to the curvature of the function. We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.", "title": "" }, { "docid": "da1f4117851762bfb5ef80c0893248c3", "text": "The recently-developed WaveNet architecture (van den Oord et al., 2016a) is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.", "title": "" }, { "docid": "98b3f17de080aed8bce62e1c00f66605", "text": "While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not explicitly considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human written captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match the global uni-, bi- and tri-gram distributions of the human captions.", "title": "" }, { "docid": "a669bebcbb6406549b78f365cf352008", "text": "Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events on the most popular of the digital currencies--BitCoin--have risen crucial questions about behavior of its exchange rates and they offer a field to study dynamics of the market which consists practically only of speculative traders with no fundamentalists as there is no fundamental value to the currency. In the paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.", "title": "" }, { "docid": "bdcd0cad7a2abcb482b1a0755a2e7af4", "text": "We present a novel attribute learning framework named Hypergraph-based Attribute Predictor (HAP). 
In HAP, a hypergraph is leveraged to depict the attribute relations in the data. Then the attribute prediction problem is casted as a regularized hypergraph cut problem, in which a collection of attribute projections is jointly learnt from the feature space to a hypergraph embedding space aligned with the attributes. The learned projections directly act as attribute classifiers (linear and kernelized). This formulation leads to a very efficient approach. By considering our model as a multi-graph cut task, our framework can flexibly incorporate other available information, in particular class label. We apply our approach to attribute prediction, Zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB databases demonstrate the value of our methods in comparison with the state-of-the-art approaches.", "title": "" }, { "docid": "680d3951a7280f78a7a9b6abef8cb65e", "text": "BACKGROUND\nTo determine the factors affecting the prevalence of depression and also to present some pertinent comments concerning prevention of depression among high school students. This study was deemed important and relevant due to the increasing importance of depression among high school students.\n\n\nMETHODS\nA sample of students aged 14-19 years from the 6 high schools of 1 district of western Turkey were surveyed. The students selected were all attending the school during March and April 2006. The Beck Depression Inventory was used as a screening test.\n\n\nRESULTS\nDuring the study, a total of 846 students completed the survey. Of the study group, 51.9% (439) were male and 48.1% (407) female, with an age average of 16.3 +/- 1.1 years. According to the scale, the prevalence of depression was 30.7% (n = 260), 22.6% for males (n = 99) and 39.6% for females (n = 161). The most depression was seen in males (22.6%), those with any kind of physical problem (37.3%), those with diseases necessitating the use of medication (51.1%), those with acne vulgaris (35.2%), and those having previously experienced any kind of problem (47.3%).\n\n\nCONCLUSIONS\nThese results highlight not only the need for students' parents and teachers to be well informed on the subject of depression in terms of students' health but also the need for more education programs to be aimed at students relating to the problems they may experience during the period of adolescence. Furthermore, these results show that students identified as depressed should be referred for an appropriate diagnosis to specialized psychiatry centers.", "title": "" }, { "docid": "f945b645e492e2b5c6c2d2d4ea6c57ae", "text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. 
New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.", "title": "" }, { "docid": "3482354f79c4185ad9d63412184ddce4", "text": "In this paper we address the problem of learning the Markov blanket of a quantity from data in an efficient manner Markov blanket discovery can be used in the feature selection problem to find an optimal set of features for classification tasks, and is a frequently-used preprocessing phase in data mining, especially for high-dimensional domains. Our contribution is a novel algorithm for the induction of Markov blankets from data, called Fast-IAMB, that employs a heuristic to quickly recover the Markov blanket. Empirical results show that Fast-IAMB performs in many cases faster and more reliably than existing algorithms without adversely affecting the accuracy of the recovered Markov blankets.", "title": "" }, { "docid": "8bd0c280a95f549bd5596fb1f7499e44", "text": "Mobile devices are becoming ubiquitous. People take pictures via their phone cameras to explore the world on the go. In many cases, they are concerned with the picture-related information. Understanding user intent conveyed by those pictures therefore becomes important. Existing mobile applications employ visual search to connect the captured picture with the physical world. However, they only achieve limited success due to the ambiguity nature of user intent in the picture-one picture usually contains multiple objects. By taking advantage of multitouch interactions on mobile devices, this paper presents a prototype of interactive mobile visual search, named TapTell, to help users formulate their visual intent more conveniently. This kind of search leverages limited yet natural user interactions on the phone to achieve more effective visual search while maintaining a satisfying user experience. We make three contributions in this work. First, we conduct a focus study on the usage patterns and concerned factors for mobile visual search, which in turn leads to the interactive design of expressing visual intent by gesture. Second, we introduce four modes of gesture-based interactions (crop, line, lasso, and tap) and develop a mobile prototype. Third, we perform an in-depth usability evaluation on these different modes, which demonstrates the advantage of interactions and shows that lasso is the most natural and effective interaction mode. We show that TapTell provides a natural user experience to use phone camera and gesture to explore the world. Based on the observation and conclusion, we also suggest some design principles for interactive mobile visual search in the future.", "title": "" }, { "docid": "bb14516966d027b70c3633550b3ee567", "text": "This study examined the extent to which sexual offenders present an enduring risk for sexual recidivism over a 20-year follow-up period. Using an aggregated sample of 7,740 sexual offenders from 21 samples, the yearly recidivism rates were calculated using survival analysis. Overall, the risk of sexual recidivism was highest during the first few years after release, and decreased substantially the longer individuals remained sex offense-free in the community. This pattern was particularly strong for the high-risk sexual offenders (defined by Static-99R scores). 
Whereas the 5-year sexual recidivism rate for high-risk sex offenders was 22% from the time of release, this rate decreased to 4.2% for the offenders in the same static risk category who remained offense-free in the community for 10 years. The recidivism rates of the low-risk offenders were consistently low (1%-5%) for all time periods. The results suggest that offense history is a valid, but time-dependent, indicator of the propensity to sexually reoffend. Further research is needed to explain the substantial rate of desistance by high-risk sexual offenders.", "title": "" }, { "docid": "1a510d931fc7ad592af6e86c0939fea7", "text": "At the core of any inference procedure in deep neural networks are dot product operations, which are the component that require the highest computational resources. For instance, deep neural networks such as VGG-16 require up to 15 gigaoperations in order to perform the dot products present in a single forward pass, which results in significant energy consumption and therefore limit their use in resource-limited environments, e.g., on embedded devices or smartphones. A common approach to reduce the cost of inference is to reduce its memory complexity by lowering the entropy of the weight matrices of the neural network, e.g., by pruning and quantizing their elements. However, the quantized weight matrices are then usually represented either by a dense or sparse matrix storage format, whose associated dot product complexity is not bounded by the entropy of the matrix. This means that the associated inference complexity ultimately depends on the implicit statistical assumptions that these matrix representations make about the weight distribution, which can be in many cases suboptimal. In this paper we address this issue and present new efficient representations for matrices with low entropy statistics. These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, consequently implying that they are guaranteed to become more efficient as the entropy of the matrix is being reduced. In our experiments we show that performing the dot product under these new matrix formats can indeed be more energy and time efficient under practically relevant assumptions. For instance, we are able to attain up to x42 compression ratios, x5 speed ups and x90 energy savings when we convert in a lossless manner the weight matrices of state-of-the-art networks such as AlexNet, VGG-16, ResNet152 and DenseNet into the new matrix formats and benchmark their respective dot product operation. Keywords—Neural network compression, computationally efficient deep learning, data structures, sparse matrices, lossless coding.", "title": "" }, { "docid": "b9d74514b91ac160bce0b39e0872c0b2", "text": "Human falls occur very rarely; this makes it difficult to employ supervised classification techniques. Moreover, the sensing modality used must preserve the identity of those being monitored. In this paper, we investigate the use of thermal camera for fall detection, since it effectively masks the identity of those being monitored. We formulate the fall detection problem as an anomaly detection problem and aim to use autoencoders to identify falls. We also present a new anomaly scoring method to combine the reconstruction score of a frame across different video sequences. 
Our experiments suggest that Convolutional LSTM autoencoders perform better than convolutional and deep autoencoders in detecting unseen falls.", "title": "" } ]
scidocsrr
9e5a4fd1898b30d5c97cd6b2d29e7fe5
Improving Chain of Custody in Forensic Investigation of Electronic Digital Systems
[ { "docid": "faf0e45405b3c31135a20d7bff6e7a5a", "text": "Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstractionmodel of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstraction Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved", "title": "" } ]
[ { "docid": "6e051906ec3deac14acb249ea4982d2e", "text": "Recent attempts to fabricate surfaces with custom reflectance functions boast impressive angular resolution, yet their spatial resolution is limited. In this paper we present a method to construct spatially varying reflectance at a high resolution of up to 220dpi, orders of magnitude greater than previous attempts, albeit with a lower angular resolution. The resolution of previous approaches is limited by the machining, but more fundamentally, by the geometric optics model on which they are built. Beyond a certain scale geometric optics models break down and wave effects must be taken into account. We present an analysis of incoherent reflectance based on wave optics and gain important insights into reflectance design. We further suggest and demonstrate a practical method, which takes into account the limitations of existing micro-fabrication techniques such as photolithography to design and fabricate a range of reflection effects, based on wave interference.", "title": "" }, { "docid": "69a11f89a92051631e1c07f2af475843", "text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.", "title": "" }, { "docid": "4304d7ef3caaaf874ad0168ce8001678", "text": "In a path-breaking paper last year Pat and Betty O’Neil and Gerhard Weikum pro posed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm[l5]. Their improvement is called LRU/k and advocates giving priority to buffer pages baaed on the kth most recent access. (The standard LRU algorithm is denoted LRU/l according to this terminology.) If Pl’s kth most recent access is more more recent than P2’s, then Pl will be replaced after P2. Intuitively, LRU/k for k > 1 is a good strategy, because it gives low priority to pages that have been scanned or to pages that belong to a big randomly accessed file (e.g., the account file in TPC/A). They found that LRU/S achieves most of the advantage of their method. The one problem of LRU/S is the processor *Supported by U.S. Office of Naval Research #N00014-91-E 1472 and #N99914-92-J-1719, U.S. National Science Foundation grants #CC%9103953 and IFlI-9224691, and USBA #5555-19. Part of this work was performed while Theodore Johnson was a 1993 ASEE Summer Faculty Fellow at the National Space Science Data Center of NASA Goddard Space Flight Center. t Authors’ e-mail addresses : ted@cis.ufi.edu and", "title": "" }, { "docid": "8d80bfe0015c6b867c5ad8311e45d3fa", "text": "OBJECTIVES\nIt has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. 
However, the integration of qualitative and quantitative approaches continues to be one of much debate and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses.\n\n\nDESIGN\nThis review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009.\n\n\nRESULTS\nIn total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified and an example of developing theory from such data is provided.\n\n\nCONCLUSION\nA trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings, help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory.", "title": "" }, { "docid": "19269e78ef1aee1f4921230b42b6c4b6", "text": "Traditional methods of motion segmentation use powerful geometric constraints to understand motion, but fail to leverage the semantics of high-level image understanding. Modern CNN methods of motion analysis, on the other hand, excel at identifying well-known structures, but may not precisely characterize well-known geometric constraints. In this work, we build a new statistical model of rigid motion flow based on classical perspective projection constraints. We then combine piecewise rigid motions into complex deformable and articulated objects, guided by semantic segmentation from CNNs and a second \"object-level\" statistical model. This combination of classical geometric knowledge combined with the pattern recognition abilities of CNNs yields excellent performance on a wide range of motion segmentation benchmarks, from complex geometric scenes to camouflaged animals.", "title": "" }, { "docid": "c30a60cdcdc894594692bd730cd09846", "text": "Healthcare sector is totally different from other industry. It is on high priority sector and people expect highest level of care and services regardless of cost. It did not achieve social expectation even though it consume huge percentage of budget. Mostly the interpretations of medical data is being done by medical expert. In terms of image interpretation by human expert, it is quite limited due to its subjectivity, complexity of the image, extensive variations exist across different interpreters, and fatigue. 
After the success of deep learning in other real world application, it is also providing exciting solutions with good accuracy for medical imaging and is seen as a key method for future applications in health secotr. In this chapter, we discussed state of the art deep learning architecture and its optimization used for medical image segmentation and classification. In the last section, we have discussed the challenges deep learning based methods for medical imaging and open research issue.", "title": "" }, { "docid": "b7dfec026a9fe18eb2cd8bdfd6cfa416", "text": "Based on the hypothesis that frame-semantic parsing and event extraction are structurally identical tasks, we retrain SEMAFOR, a stateof-the-art frame-semantic parsing system to predict event triggers and arguments. We describe how we change SEMAFOR to be better suited for the new task and show that it performs comparable to one of the best systems in event extraction. We also describe a bias in one of its models and propose a feature factorization which is better suited for this model.", "title": "" }, { "docid": "fc387da4792896b1c85d18e4bd5f7376", "text": "It is generally understood that building software systems with components has many advantages but the difficulties of this approach should not be ignored. System evolution, maintenance, migration and compatibilities are some of the challenges met with when developing a component-based software system. Since most systems evolve over time, components must be maintained or replaced. The evolution of requirements affects not only specific system functions and particular components but also component-based architecture on all levels. Increased complexity is a consequence of different components and systems having different life cycles. In component-based systems it is easier to replace part of system with a commercial component. This process is however not straightforward and different factors such as requirements management, marketing issues, etc., must be taken into consideration. In this paper we discuss the issues and challenges encountered when developing and using an evolving component-based software system. An industrial control system has been used as a case study.", "title": "" }, { "docid": "692174cc5dd763333cebbea576c8930b", "text": "The Histograms of Oriented Gradients (HOG) descriptor represents shape information by storing the local gradients in an image. The Haar wavelet transform is a simple yet powerful technique that can separately enhance the horizontal and vertical local features in an image. In this paper, we enhance the HOG descriptor by subjecting the image to the Haar wavelet transform and then computing HOG from the result in a manner that enriches the shape information encoded in the descriptor. First, we define the novel HaarHOG descriptor for grayscale images and extend this idea for color images. Second, we compare the image recognition performance of the HaarHOG descriptor with the traditional HOG descriptor in four different color spaces and grayscale. Finally, we compare the image classification performance of the HaarHOG descriptor with some popular descriptors used by other researchers on four grand challenge datasets.", "title": "" }, { "docid": "eaad298fce83ade590a800d2318a2928", "text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. 
The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.", "title": "" }, { "docid": "fe8d20422454f095c5a14bce3523748d", "text": "This paper Put forward a glass crack detection algorithm based on digital image processing technology, obtain identification information of glass surface crack image by making use of pre-processing, image segmentation, feature extraction on the glass crack image, calculate the target area and perimeter of the roundness index to judge whether this image with a crack, make use of Visual Basic6.0 programming language to impolder the crack detection system, achieve the function of each part in crack detection process.", "title": "" }, { "docid": "3a3470d13c9c63af1a62ee7bc57a96ef", "text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.", "title": "" }, { "docid": "22e5e34e2df4c02a4df3255dbabf1fcb", "text": "In this paper, we propose to show how video data available in standard CCTV transportation systems can represent a useful source of information for transportation infrastructure management, optimization and planning if adequately analyzed (e.g. to facilitate equipment usage understanding, to ease diagnostic and planning for system managers). More precisely, we present two algorithms allowing to estimate the number of people in a camera view and to measure the platform time-occupancy by trains. A statistical analysis of the results of each algorithm provide interesting insights regarding station usage. It is also shown that combining information from the algorithms in different views provide a finer understanding of the station usage. 
An end-user point of view confirms the interest of the proposed analysis.", "title": "" }, { "docid": "6445e510d1e3806b878ae07288d2578b", "text": "The functionalization of polymeric substances is of great interest for the development of 15 innovative materials for advanced applications. For many decades, the functionalization of 16 chitosan has been a convenient way to improve its properties with the aim to prepare new 17 materials with specialized characteristics. In the present article, we summarize the latest methods 18 for the modification and derivatization of chitin and chitosan, trying to introduce specific 19 functional groups under experimental conditions, which allow a control over the macromolecular 20 architecture. This is motivated because an understanding of the interdependence between chemical 21 structure and properties is an important condition for proposing innovative materials. New 22 advances in methods and strategies of functionalization such as click chemistry approach, grafting 23 onto copolymerization, coupling with cyclodextrins and reactions in ionic liquids are discussed. 24", "title": "" }, { "docid": "4c12d04ce9574aab071964e41f0c5f4e", "text": "The complete genome sequence of Treponema pallidum was determined and shown to be 1,138,006 base pairs containing 1041 predicted coding sequences (open reading frames). Systems for DNA replication, transcription, translation, and repair are intact, but catabolic and biosynthetic activities are minimized. The number of identifiable transporters is small, and no phosphoenolpyruvate:phosphotransferase carbohydrate transporters were found. Potential virulence factors include a family of 12 potential membrane proteins and several putative hemolysins. Comparison of the T. pallidum genome sequence with that of another pathogenic spirochete, Borrelia burgdorferi, the agent of Lyme disease, identified unique and common genes and substantiates the considerable diversity observed among pathogenic spirochetes.", "title": "" }, { "docid": "f14515c943b95e5e47c7f4f95b93f6fe", "text": "The Architecture, Engineering & Construction (AEC) sector is a highly fragmented, data intensive, project based industry, involving a number of very different professions and organisations. Projects carried out within this sector involve collaboration between various people, using a variety of different systems. This, along with the industry’s strong data sharing and processing requirements, means that the management of building data is complex and challenging. This paper presents a solution to data sharing requirements of the AEC sector by utilising Cloud Computing. Our solution presents two key contributions, first a governance model for building data, based on extensive research and industry consultation. Second, a prototype implementation of this governance model, utilising the CometCloud autonomic Cloud Computing engine based on the Master/Worker paradigm. We have integrated our prototype with the 3D modelling software Google Sketchup. The approach and prototype presented has applicability in a number of other eScience related applications involving multi-disciplinary, collaborative working using Cloud Computing infrastructure.", "title": "" }, { "docid": "f4ea679d2c09107b1313a4795c749ca2", "text": "Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. 
Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data.", "title": "" }, { "docid": "820b38bf53b58c0557a574a3c210955a", "text": "Modern students will work in the “Industry 4.0” and create digital economy of Russia. Digital economy is based on the infrastructure organization of production, which is based on the network interaction of production and technology. The world infrastructure of transport, trading, finance provides a technological organization of production and consumers. Industry 4.0 begins with the creation of redundant infrastructure networks - Industrial Internet of Things (IIoT). Today, university education corresponds to the processing form of the technological organization. The infrastructure form of the organization of educational process is necessary. Information technologies supporting the entire training process and representing it on the Internet, do not exist. A new methodology of educational process is suggested. Training in the training process, lectures, seminars, laboratory works are organized according to the logical fulfillment of the target works. The task of the teacher is the design of the target works and the analysis of the result. The simulation training system is a form of the activity organization that is directed to the development of participants and obtaining a result. The result will be a course work as a part of the project in the simulation system. Students act in specially developed problem, training and developing situations. Practical formation in the field of wireless communications technologies is carried out on the basis of equipment of software defined radio (SDR) or universal software radio peripheral (USRP) systems as a combination of hardware and software platforms for creating prototypes of real radio systems. Laboratory and research works are performed on the basis of this firmware radio system (SDR) in the remote Internet access Mode. This conforms to the principles of the Industry4.0. The project is carried out under the control of the teacher. The activity of each student is monitored. The Internet provides individual activities and simulation, communication, scheduling, and group activity analysis. As a result, there is no knowledge of the sum of knowledge, and the integral technical culture, which is a criterion of the education.", "title": "" }, { "docid": "c89347bd4819678592699b1cc982436f", "text": "Online tracking is evolving from browserand devicetracking to people-tracking. 
As users are increasingly accessing the Internet from multiple devices this new paradigm of tracking—in most cases for purposes of advertising—is aimed at crossing the boundary between a user’s individual devices and browsers. It establishes a person-centric view of a user across devices and seeks to combine the input from various data sources into an individual and comprehensive user profile. By its very nature such cross-device tracking can principally reveal a complete picture of a person and, thus, become more privacy-invasive than the siloed tracking via HTTP cookies or other traditional and more limited tracking mechanisms. In this study we are exploring cross-device tracking techniques as well as their privacy implications. Particularly, we demonstrate a method to detect the occurrence of cross-device tracking, and, based on a cross-device tracking dataset that we collected from 126 Internet users, we explore the prevalence of cross-device trackers on mobile and desktop devices. We show that the similarity of IP addresses and Internet history for a user’s devices gives rise to a matching rate of F-1 = 0.91 for connecting a mobile to a desktop device in our dataset. This finding is especially noteworthy in light of the increase in learning power that cross-device companies may achieve by leveraging user data from more than one device. Given these privacy implications of cross-device tracking we also examine compliance with applicable self-regulation for 40 cross-device companies and find that some are not transparent about their practices.", "title": "" }, { "docid": "8979ac412e25cf842611dcb257836cea", "text": "Tensors or <italic>multiway arrays</italic> are functions of three or more indices <inline-formula> <tex-math notation=\"LaTeX\">$(i,j,k,\\ldots)$</tex-math></inline-formula>—similar to matrices (two-way arrays), which are functions of two indices <inline-formula><tex-math notation=\"LaTeX\">$(r,c)$</tex-math></inline-formula> for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining, and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth <italic>and depth</italic> that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.", "title": "" } ]
scidocsrr
a41b0a596c6a2201096c220174420981
Bandwidth calendaring: Dynamic services scheduling over Software Defined Networks
[ { "docid": "9d18884689c3a9decf536b7e7512fbf2", "text": "Software Defined Networking (SDN) is an emerging networking paradigm that separates the network control plane from the data forwarding plane with the promise to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. Although traffic engineering techniques have been widely exploited in the past and current data networks, such as ATM networks and IP/ MPLS networks, to optimize the performance of communication networks by dynamically analyzing, predicting, and regulating the behavior of the transmitted data, the unique features of SDN require new traffic engineering techniques that exploit the global network view, status, and flow patterns/characteristics available for better traffic control and management. This paper surveys the state-of-the-art in traffic engineering for SDNs, and mainly focuses on four thrusts including flow management, fault tolerance, topology update, and traffic analysis/characterization. In addition, some existing and representative traffic engineering tools from both industry and academia are explained. Moreover, open research issues for the realization of SDN traffic engineering solutions are discussed in detail. 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "9873c52bd1e8c073ca8c74fbcec6970f", "text": "In situations where a seller has surplus stock and another seller is stocked out, it may be desirable to transfer surplus stock from the former to the latter. We examine how the possibility of such transshipments between two independent locations affects the optimal inventory orders at each location. If each location aims to maximize its own profits—we call this local decision making—their inventory choices will not, in general, maximize joint profits. We find transshipment prices which induce the locations to choose inventory levels consistent with joint-profit maximization. (Transshipments; Newsvendor Model; Nash Equilibrium)", "title": "" }, { "docid": "34cb931b042cfee566b2e6067c0b4abb", "text": "Convolutional Neural Networks (CNNs) have shown great success in solving key artificial vision challenges such as image segmentation. Training these networks, however, normally requires plenty of labeled data, while data labeling is an expensive and time-consuming task, due to the significant human effort involved. In this paper we propose two pixel-level domain adaptation methods, introducing a training model for CNN based iris segmentation. Based on our experiments, the proposed methods can effectively transfer the domains of source databases to those of the targets, producing new adapted databases. The adapted databases then are used to train CNNs for segmentation of iris texture in the target databases, eliminating the need for the target labeled data. We also indicate that training a specific CNN for a new iris segmentation task, maintaining optimal segmentation scores, is possible using a very low number of training samples.", "title": "" }, { "docid": "637197b6deaa42f96b6ed7b28f947b4d", "text": "A wideband omnidirectional circularly polarized (CP) antenna is developed. The omnidirectional CP antenna includes four tilted dipoles that are wrapped around a cylinder for omnidirectional CP pattern. Each tilted dipole is coupled with a parasitic element for bandwidth enhancement. It is shown that the wideband omnidirectional CP antenna achieves an axial ratio (AR) bandwidth of 44% (1.69-2.64 GHz) for AR <; 3 dB and an impedance bandwidth of 61% (1.35-2.54 GHz) for return loss >10 dB. The antenna gain is around 1 dBi with an omnidirectionality of less than 1.5 dB.", "title": "" }, { "docid": "d1f3961959f11ce553237ef8941da86a", "text": "Inspired by recent successes of deep learning in computer vision and speech recognition, we propose a novel framework to encode time series data as different types of images, namely, Gramian Angular Fields (GAF) and Markov Transition Fields (MTF). This enables the use of techniques from computer vision for classification. Using a polar coordinate system, GAF images are represented as a Gramian matrix where each element is the trigonometric sum (i.e., superposition of directions) between different time intervals. MTF images represent the first order Markov transition probability along one dimension and temporal dependency along the other. We used Tiled Convolutional Neural Networks (tiled CNNs) on 12 standard datasets to learn high-level features from individual GAF, MTF, and GAF-MTF images that resulted from combining GAF and MTF representations into a single image. The classification results of our approach are competitive with five stateof-the-art approaches. 
An analysis of the features and weights learned via tiled CNNs explains why the approach works.", "title": "" }, { "docid": "370c728b64c8cf6c63815729f4f9b03e", "text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. Kinematic differences among pitches depend on the segment of the body studied.", "title": "" }, { "docid": "18ee965b96c72dbbfc8ce833548a4f72", "text": "With the inverse synthetic aperture radar (ISAR) imaging model, targets should move smoothly during the coherent processing interval (CPI). Since the CPI is quite long, fluctuations of a target's velocity and gesture will deteriorate image quality. This paper presents a multiple-input-multiple-output (MIMO)-ISAR imaging method by combining MIMO techniques and ISAR imaging theory. By using a special M-transmitter N-receiver linear array, a group of M orthogonal phase-code modulation signals with identical bandwidth and center frequency is transmitted. With a matched filter set, every target response corresponding to the orthogonal signals can be isolated at each receiving channel, and range compression is completed simultaneously. Based on phase center approximation theory, the minimum entropy criterion is used to rearrange the echo data after the target's velocity has been estimated, and then, the azimuth imaging will finally finish. The analysis of imaging and simulation results show that the minimum CPI of the MIMO-ISAR imaging method is 1/MN of the conventional ISAR imaging method under the same azimuth-resolution condition. It means that most flying targets can satisfy the condition that targets should move smoothly during CPI; therefore, the applicability and the quality of ISAR imaging will be improved.", "title": "" }, { "docid": "9a5be4452928d80d6be8e8e0267dafa5", "text": "degeneration of the basal layer in the epidermis. In the dermis, perivascular or lichenoid infiltrate and the presence of melanin incontinence were the predominant changes noted. A recently developed lesion tends to show more predominant band-like lymphocytic infiltration and epidermal vacuolization rather than epidermal atrophy. Linear lesions can frequently occur at sites of scratching or trauma in patients with LP as a result of Koebner’s phenomenon, or, as in our case, they may appear spontaneously within the lines of Blaschko on the face. 
In acquired Blaschko linear inflammatory dermatosis, cutaneous antigenic mosaicism could be responsible for the susceptibility to induce mosaic T-cell responses. Because drugs had not been changed in type or dosage over several years of treatment, and underlying medical diseases had been well controlled, the possibility of drug-related reaction was thought to be low. Considering the clinical features in our patient, and the fact that exposed sites were frequently the first to be involved, it can be suggested that exposure to sunlight (even in a casual dose) may be a kind of stimuli to induce the lesion of LPP in a genetically susceptible patient. Usually the course is chronic and treatments are less effective for follicular LP or LPP than for classical LP. Topical tacrolimus, a member of the immunosuppressive macrolide family that suppresses T-cell activation, has been shown to be effective in the treatment of some mucosal and follicular LP. There is only one article about the successful treatment of LPP with topical tacrolimus. Although they showed over 50% improvement in seven of 13 patients after 4 months of treatment, the authors did not mention any case of complete clearance in their article. Moreover, the other six of the 13 patients did not show improvement in pigmentation. Therefore, in the present case, 1064-nm QSNY with low fluence treatment was chosen for treating pigmentation. The 1064-nm QSNY in nanosecond (ns) domain is strongly absorbed by the finely distributed melanin in dermal pigmented lesions. Moreover, 1064-nm QSNY with low fluence, which in a ‘‘top-hat’’ beam mode can evenly distribute energy density throughout the whole spot, is now widely used when treating darker skin types, because it greatly reduces the risk of epidermal injury and post-therapy dyschromia. In our patient, because of poor response to topical steroid, we started tacrolimus ointment for mainly targeting T cells, and for the treatment of pigmentation, we added QSNY treatment. It suggests that the combination treatment of 1064-nm low fluenced QSNY with topical tacrolimus may be a good therapeutic option for patients with recalcitrant facial LPP in dark-skinned individuals.", "title": "" }, { "docid": "e74b0d0c76b7ed7c6025d8773347ea23", "text": "In this paper we introduce a comprehensive survey of wearable systems designed to assist the visual impaired users navigation in everyday life outdoor scenarios. We focus on presenting the main advantages and limitations of each technique in effort to inform the scientific community about the progress in the area of assistive devices and also offer users a review about the capabilities of each system. Various performance parameters are introduced in order to classify different systems by giving qualitative and quantitative measures for evaluation. At the end of the study conclusions are presented along with some perspectives for future work and development.", "title": "" }, { "docid": "228c59c9bf7b4b2741567bffb3fcf73f", "text": "This paper presents a new PSO-based optimization DBSCAN space clustering algorithm with obstacle constraints. The algorithm introduces obstacle model and simplifies two-dimensional coordinates of the cluster object coding to one-dimensional, then uses the PSO algorithm to obtain the shortest path and minimum obstacle distance. At the last stage, this paper fulfills spatial clustering based on obstacle distance. 
Theoretical analysis and experimental results show that the algorithm can get high-quality clustering result of space constraints with more reasonable and accurate quality.", "title": "" }, { "docid": "97a4202d9dd2fe645e5d118449c92319", "text": "In present scenario, the Indian government has announced the demonetization of all Rs 500 and Rs 1000, in reserve bank notes of Mahatma Gandhi series. Indian government has introduced a new Rs 500 and Rs 2000, to reduce fund illegal activity in India. Even then the new notes of fake or bogus currency are circulated in the society. The main objective of this work is used to identify fake currencies among the real. From the currency, the strip lines or continuous lines are detected from real and fake note by using edge detection techniques. HSV techniques are used to saturate the value of an input image. To achieve the enhance reliability and dynamic way in detecting the counterfeit currency.", "title": "" }, { "docid": "ae7c8771bd38fddf031a46587d7a9ee5", "text": "The Workflow Patterns Initiative was established with the aim of delineating the fundamental requirements that arise during business process modelling on a recurring basis and describe them in an imperative way. The first deliverable of this research project was a set of twenty patterns describing the control-flow perspective of workflow systems. Since their release, these patterns have been widely used by practitioners, vendors and academics alike in the selection, design and development of workflow systems [vdAtHKB03]. This paper presents the first systematic review of the original twenty control-flow patterns and provides a formal description of each of them in the form of a Coloured Petri-Net (CPN) model. It also identifies twenty three new patterns relevant to the control-flow perspective. Detailed context conditions and evaluation criteria are presented for each pattern and their implementation is assessed in fourteen commercial offerings including workflow and case handling systems, business process modelling formalisms and business process execution languages.", "title": "" }, { "docid": "089ef4e4469554a4d4ef75089fe9c7be", "text": "The attention of software vendors has moved recently to SMEs (smallto medium-sized enterprises), offering them a vast range of enterprise systems (ES), which were formerly adopted by large firms only. From reviewing information technology innovation adoption literature, it can be argued that IT innovations are highly differentiated technologies for which there is not necessarily a single adoption model. Additionally, the question of why one SME adopts an ES while another does not is still understudied. This study intends to fill this gap by investigating the factors impacting SME adoption of ES. A qualitative approach was adopted in this study involving key decision makers in nine SMEs in the Northwest of England. The contribution of this study is twofold: it provides a framework that can be used as a theoretical basis for studying SME adoption of ES, and it empirically examines the impact of the factors within this framework on SME adoption of ES. The findings of this study confirm that factors impacting the adoption of ES are different from factors impacting SME adoption of other previously studied IT innovations. 
Contrary to large companies that are mainly affected by organizational factors, this study shows that SMEs are not only affected by environmental factors as previously established, but also affected by technological and organizational factors.", "title": "" }, { "docid": "4e0ac68997acd5fdc7276ba80ae04fe3", "text": "In this work, substrate integrated waveguide (SIW) bandpass filters were designed and fabricated using LTCC process. The proposed scheme consists of SIW cavities with coupling slots and coplanar waveguide (CPW) transitions. To reduce the component size and surface occupation, three SIW cavities are laminated vertically. Both horizontal transition and vertical transition are used between the two CPW transmission lines. Based on the SICCAS-K70D LTCC material (εr = 66, tanδ = 0.002 @3.5 GHz), an S-band bandpass filter with a center frequency of 2.59 GHz was designed and fabricated using the in-house developed LTCC material and process.", "title": "" }, { "docid": "5e871d5f94884456b017f268d3f6206d", "text": "Over the last ten years, argumentation has come to be increasingly central as a core study within Artificial Intelligence (AI). The articles forming this volume reflect a variety of important trends, developments, and applications covering a range of current topics relating to the theory and applications of argumentation. Our aims in this introduction are, firstly, to place these contributions in the context of the historical foundations of argumentation in AI and, subsequently, to discuss a number of themes that have emerged in recent years resulting in a significant broadening of the areas in which argumentation based methods are used. We begin by presenting a brief overview of the issues of interest within the classical study of argumentation: in particular, its relationship— in terms of both similarities and important differences—to traditional concepts of logical reasoning and mathematical proof. We continue by outlining how a number of foundational contributions provided the basis for the formulation of argumentation models and their promotion in AI related settings and then consider a number of new themes that have emerged in recent years, many of which provide the principal topics of the research presented in this volume. © 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d1048297794d59687d3cf33eafbf0af3", "text": "Voids are one of the major defects in solder balls and their detection and assessment can help in reducing unit and board yield issues caused by excessive or very large voids. Voids are difficult to detect using manual inspection alone. 2-D X-ray machines are often used to make voids visible to an operator for manual inspection. Automated methods do not give good accuracy in void detection and measurement because of a number of challenges present in 2-D X-ray images. Some of these challenges include vias, plated-through holes, reflections from the the plating or vias, inconsistent lighting, background traces, noise, void-like artifacts, and parallax effects. None of the existing methods that has been researched or utilized in equipment could accurately and repeatably detect voids in the presence of these challenges. This paper proposes a robust automatic void detection algorithm that detects voids accurately and repeatably in the presence of the aforementioned challenges. 
The proposed method operates on the 2-D X-ray images by first segregating each individual solder ball, including balls that are overshadowed by components, in preparation for treating each ball independently for void detection. Feature parameters are extracted through different classification steps to classify each artifact detected inside the solder ball as a candidate or phantom void. Several classification steps are used to tackle the challenges exhibited in the 2-D X-ray images. The proposed method is able to detect different-sized voids inside the solder balls under different brightness conditions and voids that are partially obscured by vias. Results show that the proposed method achieves a correlation squared of 86% when compared with manually measured and averaged data from experienced operators from both 2-D and 3-D X-ray tools. The proposed algorithm is fully automated and benefits the manufacturing process by reducing operator inspection time and removing the manual measurement variability from the results, thus providing a cost-effective solution to improve output product quality.", "title": "" }, { "docid": "f3295f975adac19269bd0c35fc49483f", "text": "This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch’s (Psychol Rev 102:211–245, 1995) theory of long-term working memory, Haider and Frensch’s (J Exp Psychol Learn Mem Cognit 25:172–190, 1999) information-reduction hypothesis, and the holistic model of image perception of Kundel et al. (Radiology 242:396–402, 2007). Eye movement and performance data were cumulated from 819 experts, 187 intermediates, and 893 novices. In support of the evaluated theories, experts, when compared with non-experts, had shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas; experts also had longer saccades and shorter times to first fixate relevant information, owing to superiority in parafoveal processing and selective attention allocation. Eye movements, reaction time, and performance accuracy were moderated by characteristics of visualization (dynamics, realism, dimensionality, modality, and text annotation), task (complexity, time-on-task, and task control), and domain (sports, medicine, transportation, other). These findings are discussed in terms of their implications for theories of visual expertise in professional domains and their significance for the design of learning environments.", "title": "" }, { "docid": "5d170dcd5d2c9c1f4e5645217444fd98", "text": "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt to new tasks and domains. MTDNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.2% (1.8% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. 
Our code and pre-trained models will be made publicly available.", "title": "" }, { "docid": "74fd21dccc9e883349979c8292c5f450", "text": "Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were collected heuristically and tend to have low quality. In this paper, we investigate a new problem of systematically mining question-code pairs from Stack Overflow (in contrast to heuristically collecting them). It is formulated as predicting whether or not a code snippet is a standalone solution to a question. We propose a novel Bi-View Hierarchical Neural Network which can capture both the programming content and the textual context of a code snippet (i.e., two views) to make a prediction. On two manually annotated datasets in Python and SQL domain, our framework substantially outperforms heuristic methods with at least 15% higher F1 and accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs), the largest dataset to date of ∼148K Python and ∼120K SQL question-code pairs, automatically mined from SO using our framework. Under various case studies, we demonstrate that StaQC can greatly help develop data-hungry models for associating natural language with programming language.", "title": "" }, { "docid": "e5fc30045f458f84435363349d22204d", "text": "Today, root cause analysis of failures in data centers is mostly done through manual inspection. More often than not, customers blame the network as the culprit. However, other components of the system might have caused these failures. To troubleshoot, huge volumes of data are collected over the entire data center. Correlating such large volumes of diverse data collected from different vantage points is a daunting task even for the most skilled technicians. In this paper, we revisit the question: how much can you infer about a failure in the data center using TCP statistics collected at one of the endpoints? Using an agent that captures TCP statistics we devised a classification algorithm that identifies the root cause of failure using this information at a single endpoint. Using insights derived from this classification algorithm we identify dominant TCP metrics that indicate where/why problems occur in the network. We validate and test these methods using data that we collect over a period of six months in a production data center.", "title": "" }, { "docid": "222b060b4235b0d31199a74fbc630a0d", "text": "Online bookings of hotels have increased drastically throughout recent years. Studies in tourism and hospitality have investigated the relevance of hotel attributes influencing choice but did not yet explore them in an online booking setting. This paper presents findings about consumers’ stated preferences for decision criteria from an adaptive conjoint study among 346 respondents. The results show that recommendations of friends and online reviews are the most important factors that influence online hotel booking. Partitioning the importance values of the decision criteria reveals group-specific differences indicating the presence of market segments.", "title": "" } ]
scidocsrr
ad24d1fda8db493d27618b9ab298d284
Gated-Attention Architectures for Task-Oriented Language Grounding
[ { "docid": "87a4e88a41ede7edfac027f898a39651", "text": "We introduce a general and simple structural design called “Multiplicative Integration” (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from difference sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded into many popular RNN models, including LSTMs and GRUs. We empirically analyze its learning behaviour and conduct evaluations on several tasks using different RNN models. Our experimental results demonstrate that Multiplicative Integration can provide a substantial performance boost over many of the existing RNN models.", "title": "" }, { "docid": "95a845c61fd1e98d62f1ab175d167276", "text": "The ability to transfer knowledge from previous experiences is critical for an agent to rapidly adapt to different environments and effectively learn new tasks. In this paper we conduct an empirical study of Deep Q-Networks (DQNs) where the agent is evaluated on previously unseen environments. We show that we can train a robust network for navigation in 3D environments and demonstrate its effectiveness in generalizing to unknown maps with unknown background textures. We further investigate the effectiveness of pretraining and finetuning for transferring knowledge between various scenarios in 3D environments. In particular, we show that the features learnt by the navigation network can be effectively utilized to transfer knowledge between a diverse set of tasks, such as object collection, deathmatch, and self-localization.", "title": "" }, { "docid": "1dcae3f9b4680725d2c7f5aa1736967c", "text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.", "title": "" } ]
[ { "docid": "6b8942948b3f23971254ba7b90dac6f0", "text": "An important preprocess in computer-aided orthodontics is to segment teeth from the dental models accurately, which should involve manual interactions as few as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is a common occurrence in clinical cases. Most published methods in this area either are inaccurate or require lots of manual interactions. Motivated by the state-of-the-art general mesh segmentation methods that adopted the theory of harmonic field to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically with robustness and efficiency.", "title": "" }, { "docid": "ac4c584379ad2fac9b5e28b550e02b67", "text": "Primary cilium dysfunction underlies the pathogenesis of Bardet-Biedl syndrome (BBS), a genetic disorder whose symptoms include obesity, retinal degeneration, and nephropathy. However, despite the identification of 12 BBS genes, the molecular basis of BBS remains elusive. Here we identify a complex composed of seven highly conserved BBS proteins. This complex, the BBSome, localizes to nonmembranous centriolar satellites in the cytoplasm but also to the membrane of the cilium. Interestingly, the BBSome is required for ciliogenesis but is dispensable for centriolar satellite function. This ciliogenic function is mediated in part by the Rab8 GDP/GTP exchange factor, which localizes to the basal body and contacts the BBSome. Strikingly, Rab8(GTP) enters the primary cilium and promotes extension of the ciliary membrane. Conversely, preventing Rab8(GTP) production blocks ciliation in cells and yields characteristic BBS phenotypes in zebrafish. Our data reveal that BBS may be caused by defects in vesicular transport to the cilium.", "title": "" }, { "docid": "7641d1576250ed1a7d559cc1ad5ee439", "text": "Considerados como la base evolutiva vertebrada tras su radiación adaptativa en el Devónico, los peces constituyen en la actualidad el grupo más exitoso y diversificado de vertebrados. Como grupo, este conjunto heterogéneo de organismos representa una aparente encrucijada entre la respuesta inmunitaria innata y la aparición de una respuesta inmunitaria adaptativa. La mayoría de órganos inmunitarios de los mamíferos tienen sus homólogos en los peces. Sin embargo, su eventual menor complejidad estructural podría potencialmente limitar la capacidad para generar una respuesta inmunitaria completamente funcional frente a la invasión de patógenos. Se discute aquí la capacidad de los peces para generar respuestas inmunitarias exitosas, teniendo en cuenta la robustez aparente de la respuesta innata de los peces, en comparación con la observada en vertebrados superiores.", "title": "" }, { "docid": "799912616c6978f63938bfac6b21b1ec", "text": "Friction stir welding is a solid state joining process. High strength aluminum alloys are widely used in aircraft and marine industries. 
Generally, the mechanical properties of fusion welded aluminum joints are poor. As friction stir welding occurs in solid state, no solidification structures are created thereby eliminating the brittle and eutectic phases common in fusion welding of high strength aluminum alloys. In this review the process parameters, microstructural evolution, and effect of friction stir welding on the properties of weld specific to aluminum alloys have been discussed. Keywords—Aluminum alloys, Friction stir welding (FSW), Microstructure, Properties.", "title": "" }, { "docid": "378f881bb955777e69b5aeff090c53fe", "text": "Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and variational level set. The proposed method consists of five steps as follows: first, we extract a mask in a CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based the above mentioned procedures an initial mask for each tooth is obtained. Finally, we utilize the initial mask of teeth and apply a Variational level set to refine initial teeth boundaries to final contours. The proposed algorithm was evaluated in the presence of 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. In view of the fact that, this technique is based on the characteristic of the overall region of the teeth image, it is possible to extract a very smooth and accurate tooth contour using this technique. In the presence of the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques.", "title": "" }, { "docid": "e31fd6ce6b78a238548e802d21b05590", "text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.", "title": "" }, { "docid": "6aac9516916fc651e016c0bc3b1b2d90", "text": "Change your habit to hang or waste the time to only chat with your friends. It is done by your everyday, don't you feel bored? Now, we will show you the new habit that, actually it's a very old habit to do that can make your life more qualified. 
When feeling bored of always chatting with your friends all free time, you can find the book enPDF ontology engineering in a networked world and then read it.", "title": "" }, { "docid": "6c504c7a69dba18e8cbc6a3678ab4b09", "text": "This letter presents a compact model for flexible analog/RF circuits design with amorphous indium-gallium-zinc oxide thin-film transistors (TFTs). The model is based on the MOSFET LEVEL=3 SPICE model template, where parameters are fitted to measurements for both dc and ac characteristics. The proposed TFT compact model shows good scalability of the drain current for device channel lengths ranging from 50 to 3.6 μm. The compact model is validated by comparing measurements and simulations of various TFT amplifier circuits. These include a two-stage cascode amplifier showing 10 dB of voltage gain and 2.9 MHz of bandwidth.", "title": "" }, { "docid": "d1f74ff7e9736eb0b2f684fc37b50d7c", "text": "The simultaneous improvement in the erase and retention characteristics in a TANOS (TaN-Al<sub>2</sub>O<sub>3</sub>-Si<sub>3</sub>N<sub>4</sub>-SiO<sub>2</sub>-Si) flash memory transistor by utilizing the band-engineered and compositionally graded SiN<sub>x</sub> trap layer is demonstrated. With the process optimizations, a > 4V memory window and excellent 150 degC 24-h retention (0.1-0.5 V charge loss) for a programmed DeltaV<sub>t</sub> = 4V with respect to the initial state are obtained. The band-engineered SiN<sub>x</sub> charge storage layer enables flash scaling beyond the floating-gate technology with a promise for improved erase speed, retention, lower supply voltages, and multilevel cell applications.", "title": "" }, { "docid": "1700821e3c9ec22ec151d151f3ac7925", "text": "This review provides a comprehensive examination of the literature surrounding the current state of K–12 distance education. The growth in K–12 distance education follows in the footsteps of expanded learning opportunities at all levels of public education and training in corporate environments. Implementation has been accomplished with a limited research base, often drawing from studies in adult distance education and policies adapted from traditional learning environments. This review of literature provides an overview of the field of distance education with a focus on the research conducted in K–12 distance education environments. (", "title": "" }, { "docid": "cc0114e5365dadd1b95d54b3debbf735", "text": "Detecting events from social media data has important applications in public security, political issues, and public health. Many studies have focused on detecting specific or unspecific events from Twitter streams. However, not much attention has been paid to detecting changes, and their impact, in online conversations related to an event. We propose methods for detecting such changes, using clustering of temporal profiles of hashtags, and three change point detection algorithms. The methods were tested on two Twitter datasets: one covering the 2014 Ottawa shooting event, and one covering the Sochi winter Olympics. We compare our approach to a baseline consisting of detecting change from raw counts in the conversation. We show that our method produces large gains in change detection accuracy on both datasets.", "title": "" }, { "docid": "179b76942747f8a90f9036ea8d2377e7", "text": "CNN (Convolution Neural Network) is widely used in visual analysis and achieves exceptionally high performances in image classification, face detection, object recognition, image recoloring, and other learning jobs. 
Using deep learning frameworks, such as Torch and Tensorflow, CNN can be efficiently computed by leveraging the power of GPU. However, one drawback of GPU is its limited memory which prohibits us from handling large images. Passing a 4K resolution image to the VGG network will result in an exception of out-of-memory for Titan-X GPU. In this paper, we propose a new approach that adopts the BSP (bulk synchronous parallel) model to compute CNNs for images of any size. Before being fed to a specific CNN layer, the image is split into smaller pieces which go through the neural network separately. Then, a specific padding and normalization technique is adopted to merge sub-images back into one image. Our approach can be easily extended to support distributed multi-GPUs. In this paper, we use neural style network as our example to illustrate the effectiveness of our approach. We show that using one Titan-X GPU, we can transfer the style of an image with 10,000×10,000 pixels within 1 minute.", "title": "" }, { "docid": "bb2c7c7d064eebcef527efe93a7c873b", "text": "We have proposed and verified an efficient architecture for a high-speed I/O transceiver design that implements far-end crosstalk (FEXT) cancellation. In this design, TX pre-emphasis, used traditionally to reduce ISI, is combined with FEXT cancellation at the transmitter to remove crosstalk-induced jitter and interference. The architecture has been verified via simulation models based on channel measurement. A prototype implementation of a 12.8Gbps source-synchronous serial link transmitter has been developed in TSMC's 0.18μm CMOS technology. The proposed design consists of three 12.8Gbps data lines that use a half-rate PLL clock of 6.4GHz. The chip includes a PRBS generator to simplify multi-lane testing. Simulation results show that, even with a 2× reduction in line separation, FEXT cancellation can successfully reduce jitter by 51.2 %UI and widen the eye by 14.5%. The 2.5 × 1.5 mm2 core consumes 630mW per lane at 12.8Gbps with a 1.8V supply", "title": "" }, { "docid": "2ecdf4a4d7d21ca30f3204506a91c22c", "text": "Because of the transition from analog to digital technologies, content owners are seeking technologies for the protection of copyrighted multimedia content. Encryption and watermarking are two major tools that can be used to prevent unauthorized consumption and duplication. In this paper, we generalize an idea in a recent paper that embeds a binary pattern in the form of a binary image in the LL and HH bands at the second level of Discrete Wavelet Transform (DWT) decomposition. Our generalization includes all four bands (LL, HL, LH, and HH), and a comparison of embedding a watermark at first and second level decompositions. We tested the proposed algorithm against fifteen attacks. Embedding the watermark in lower frequencies is robust to a group of attacks, and embedding the watermark in higher frequencies is robust to another set of attacks. Only for rewatermarking and collusion attacks, the watermarks extracted from all four bands are identical. Our experiments indicate that first level decomposition appears advantageous for two reasons: The area for watermark embedding is maximized, and the extracted watermarks are more textured with better visual quality.", "title": "" }, { "docid": "fbdb8df8bfb46db664723cd255c56a5a", "text": "In this paper we present an analysis of a 280 GB AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks.
This represents approximately 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. Furthermore we present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques might not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.", "title": "" }, { "docid": "f85b08a0e3f38c1471b3c7f05e8a17ba", "text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.", "title": "" }, { "docid": "a1c5e4fd16f129f9e7d36054a2b5355c", "text": "Previous research has attempted to identify a deterrent effect of capital punishment. We argue that the quality of life in prison is likely to have a greater impact on criminal behavior than the death penalty. Using state-level panel data covering the period 1950–90, we demonstrate that the death rate among prisoners (the best available proxy for prison conditions) is negatively correlated with crime rates, consistent with deterrence. This finding is shown to be quite robust.
In contrast, there is little systematic evidence that the execution rate influences crime rates in this time period.", "title": "" }, { "docid": "df27a47597a20d0de2a001d2819ccbf7", "text": "This paper proposes an electronically reconfigurable Doherty amplifier capable of efficiently amplifying multi-standard multi-band wireless signals centered at widely spaced frequencies. The paper outlines closed form equations for an effective design methodology of frequency agile Doherty amplifiers driven with multi-mode signals using a small number of electronically tunable devices. As a proof of concept, a reconfigurable Doherty prototype is designed and fabricated to operate at 1.9, 2.14, and 2.6 GHz meant to efficiently amplify signals with peak-to-average power ratio equal to 6, 9 and 12 dB. The measurement results obtained using continuous wave signals reveal drain efficiencies of about 67% and 42% at the peak power and 12 dB output back off power respectively for the three operating frequencies. In addition, the reconfigurable Doherty amplifier is successfully linearized when driven with 20 MHz wideband code-division multiple access and 20 MHz long term evolution signals, using a Volterra based digital predistrtion algorithm which exploits a pruned Volterra series.", "title": "" }, { "docid": "689b81c8d4c1c04175313c8eefe2284e", "text": "Situational awareness in cyber domain is one of the key features for quick and accurate decision making and anomaly detection. In order to provide situational awareness, certain methods have been introduced so far and attack graph is one of them. Attack graphs help the security analyst to visualize the network topology and understand typical vulnerability and exploit behaviors in cyber domain (e.g., IT asset and the network). They provide more proactive view compared to other reactive views; hence risk management and evaluation can be done in an efficient and interactive fashion. Attack trees can be used for various purposes since they can map network assets, network attacks and possible vulnerabilities which may exist in the IT assets. This study introduces an integrated cyber security capability called, BSGS, which can help analysts to create attack trees, identify vulnerabilities and have effective risk assessment procedures. In this way, the cyber security specialists will have a more efficient and holistic way to assess their environments and take the most effective precautions to minimize cyber risks.", "title": "" } ]
scidocsrr
c7872efa4800f1b6d7fd55fdb5d1a03f
Experiments with Unit Selection Speech Databases for Indian Languages
[ { "docid": "758e19c8e39ad9e85d17d1ab67c9ef14", "text": "In addition to ordinary words and names, real text contains non-standard “words” (NSWs), including numbers, abbreviations, dates, currency amounts and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary “letter-to-sound” rules. Non-standard words also have a greater propensity than ordinary words to be ambiguous with respect to their interpretation or pronunciation. In many applications, it is desirable to “normalize” text by replacing the NSWs with the contextually appropriate ordinary word or sequence of words. Typical technology for text normalization involves sets of ad hoc rules tuned to handle one or two genres of text (often newspaper-style text) with the expected result that the techniques do not usually generalize well to new domains. The purpose of the work reported here is to take some initial steps towards addressing deficiencies in previous approaches to text normalization. We developed a taxonomy of NSWs on the basis of four rather distinct text types—news text, a recipes newsgroup, a hardware-product-specific newsgroup, and real-estate classified ads. We then investigated the application of several general techniques including n-gram language models, decision trees and weighted finite-state transducers to the range of NSW types, and demonstrated that a systematic treatment can lead to better results than have been obtained by the ad hoc treatments that have typically been used in the past. For abbreviation expansion in particular, we investigated both supervised and unsupervised approaches. We report results in terms of word-error rate, which is standard in speech recognition evaluations, but which has only occasionally been used as an overall measure in evaluating text normalization systems. c © 2001 Academic Press Author for correspondence: AT&T Labs–Research, Shannon Laboratory, Room B207, 180 Park Avenue, PO Box 971, Florham Park, NJ 07932-0000, U.S.A. E-mail: rws@research.att.com 0885–2308/01/030287 + 47 $35.00/0 c © 2001 Academic Press", "title": "" } ]
[ { "docid": "006ea5f44521c42ec513edc1cbff1c43", "text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.", "title": "" }, { "docid": "a1d58b3a9628dc99edf53c1112dc99b8", "text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c530181b0ed858cf8c2819ff1fcda1b4", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (BCNN), which has shown dramatic performance gains on certain fine-grained recognition problems [13]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [10]. This is the first widely available public benchmark designed specifically to test face identification in real-world images. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computer face detection system, it does not have the bias inherent in such a database. As a result, it includes variations in pose that are more challenging than many other popular benchmarks. In our experiments, we demonstrate the performance of the model trained only on ImageNet, then fine-tuned on the training set of IJB-A, and finally use a moderate-sized external database, FaceScrub [15]. Another feature of this benchmark is that that the testing data consists of collections of samples of a particular identity. We consider two techniques for pooling samples from these collections to improve performance over using only a single image, and we report results for both methods. 
Our application of this new CNN to the IJB-A results in gains over the published baselines of this new database.", "title": "" }, { "docid": "ba05d570ef1f1f7b65c94b83e2b1ec72", "text": "Convolutional Neural Networks (CNNs) are a nature-inspired model, extensively employed in a broad range of applications in computer vision, machine learning and pattern recognition. The CNN algorithm requires execution of multiple layers, commonly called convolution layers, that involve application of 2D convolution filters of different sizes over a set of input image features. Such a computation kernel is intrinsically parallel, thus significantly benefits from acceleration on parallel hardware. In this work, we propose an accelerator architecture, suitable to be implemented on mid-to high-range FPGA devices, that can be re-configured at runtime to adapt to different filter sizes in different convolution layers. We present an accelerator configuration, mapped on a Xilinx Zynq XC-Z7045 device, that achieves up to 120 GMAC/s (16 bit precision) when executing 5×5 filters and up to 129 GMAC/s when executing 3×3 filters, consuming less than 10W of power, reaching more than 97% DSP resource utilizazion at 150MHz operating frequency and requiring only 16B/cycle I/O bandwidth.", "title": "" }, { "docid": "f381cce9e26441779b2741e19875f0d9", "text": "Human affect recognition is the field of study associated with using automatic techniques to identify human emotion or human affective state. A person's affective states is often communicated non-verbally through body language. A large part of human body language communication is the use of head gestures. Almost all cultures use subtle head movements to convey meaning. Two of the most common and distinct head gestures are the head nod and the head shake gestures. In this paper we present a robust system to automatically detect head nod and shakes. We employ the Microsoft Kinect and utilise discrete Hidden Markov Models (HMMs) as the backbone to a machine learning based classifier within the system. The system achieves 86% accuracy on test datasets and results are provided.", "title": "" }, { "docid": "fcf46a98f9e77c83e4946bc75fb97849", "text": "Recent work on sequence to sequence translation using Recurrent Neural Networks (RNNs) based on Long Short Term Memory (LSTM) architectures has shown great potential for learning useful representations of sequential data. A oneto-many encoder-decoder(s) scheme allows for a single encoder to provide representations serving multiple purposes. In our case, we present an LSTM encoder network able to produce representations used by two decoders: one that reconstructs, and one that classifies if the training sequence has an associated label. This allows the network to learn representations that are useful for both discriminative and reconstructive tasks at the same time. This paradigm is well suited for semi-supervised learning with sequences and we test our proposed approach on an action recognition task using motion capture (MOCAP) sequences. We find that semi-supervised feature learning can improve state-of-the-art movement classification accuracy on the HDM05 action dataset. 
Further, we find that even when using only labeled data and a primarily discriminative objective the addition of a reconstructive decoder can serve as a form of regularization that reduces over-fitting and improves test set accuracy.", "title": "" }, { "docid": "72e9f82070605ca5f0467f29ad9ca780", "text": "Social media are pervaded by unsubstantiated or untruthful rumors, that contribute to the alarming phenomenon of misinformation. The widespread presence of a heterogeneous mass of information sources may affect the mechanisms behind the formation of public opinion. Such a scenario is a florid environment for digital wildfires when combined with functional illiteracy, information overload, and confirmation bias. In this essay, we focus on a collection of works aiming at providing quantitative evidence about the cognitive determinants behind misinformation and rumor spreading. We account for users’ behavior with respect to two distinct narratives: a) conspiracy and b) scientific information sources. In particular, we analyze Facebook data on a time span of five years in both the Italian and the US context, and measure users’ response to i) information consistent with one’s narrative, ii) troll contents, and iii) dissenting information e.g., debunking attempts. Our findings suggest that users tend to a) join polarized communities sharing a common narrative (echo chambers), b) acquire information confirming their beliefs (confirmation bias) even if containing false claims, and c) ignore dissenting information.", "title": "" }, { "docid": "43f3908d103ab31ab3a958c0ead9eaf8", "text": "Decision making and risk assessment are becoming a challenging task in oil and gas due to the risk related to the uncertainty and imprecision. This paper proposed a model for the risk assessment based on multi-criteria decision making (MCDM) method by integrating Fuzzy-set theory. In this model, decision makers (experts) provide their preference of risk assessment information in four categories; people, environment, asset, and reputation. A fuzzy set theory is used to evaluate likelihood, consequence and total risk level associated with each category. A case study is presented to demonstrate the proposed model. The results indicate that the proposed Fuzzy MCDM method has the potential to be used by decision makers in evaluating the risk based on multiple inputs and criteria.", "title": "" }, { "docid": "a55f78c1171b1f3b989c4993942317b3", "text": "Injecting binary code into a running program is a common form of attack. Most defenses employ a “guard the doors” approach, blocking known mechanisms of code injection. Randomized instruction set emulation (RISE) is a complementary method of defense, one that performs a hidden randomization of an application's machine code. If foreign binary code is injected into a program running under RISE, it will not be executable because it will not know the proper randomization. The paper describes and analyzes RISE, describing a proof-of-concept implementation built on the open-source Valgrind IA32-to-IA32 translator. The prototype effectively disrupts binary code injection attacks, without requiring recompilation, linking, or access to application source code. Under RISE, injected code (attacks) essentially executes random code sequences. Empirical studies and a theoretical model are reported which treat the effects of executing random code on two different architectures (IA32 and PowerPC). 
The paper discusses possible extensions and applications of the RISE technique in other contexts.", "title": "" }, { "docid": "78ccfdac121daaae3abe3f8f7c73482b", "text": "We present a method for constructing smooth n-direction fields (line fields, cross fields, etc.) on surfaces that is an order of magnitude faster than state-of-the-art methods, while still producing fields of equal or better quality. Fields produced by the method are globally optimal in the sense that they minimize a simple, well-defined quadratic smoothness energy over all possible configurations of singularities (number, location, and index). The method is fully automatic and can optionally produce fields aligned with a given guidance field such as principal curvature directions. Computationally the smoothest field is found via a sparse eigenvalue problem involving a matrix similar to the cotan-Laplacian. When a guidance field is present, finding the optimal field amounts to solving a single linear system.", "title": "" }, { "docid": "429ac6709131b648bb44a6ccaebe6a19", "text": "We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique.", "title": "" }, { "docid": "ae9770179419eb898f944725d8f2165c", "text": "Cloud computing adoption has represented a big challenge for all kinds of companies all over the world. The challenge involves such questions as where to start, which provider should the company choose or whether it is even worthwhile. With a constantly changing economic environment, businesses must assess current technologies and offerings to remain competitive. The possibility of migrating a company's services and infrastructure to the cloud may seem attractive. However, without proper guidance, the results may not be as expected, leading a loss of time and money. As each company has its own needs and requirements, industry-focused frameworks have been proposed (e.g. for educational or governmental institutions). Although these frameworks are useful, they are not applicable to every business. Hence, a generic, widely-applicable and implementable cloud computing adoption framework is proposed in this paper. It takes the best outcomes of previous studies, best-practice suggestions, as well as authors' additions, and sums them up into a more robust, unified framework. The framework consists of 6 detailed phases carrying the user from knowing the company's current state to successfully migrating the data, services, and infrastructure to the cloud. These steps are intended to help IT directors and other decision-makers to reduce risks and maximize benefits throughout the cloud computing adoption process. Data security risks are not discussed in this paper as other authors have already sufficiently studied them. 
This framework was developed from a business perspective.", "title": "" }, { "docid": "e95bef9aac5bb118109d82dec750da26", "text": "A novel microstrip circular disc monopole antenna with a reconfigurable 10-dB impedance bandwidth is proposed in this communication for cognitive radios (CRs). The antenna is fed by a microstrip line integrated with a bandpass filter based on a three-line coupled resonator (TLCR). The reconfiguration of the filter enables the monopole antenna to operate at either a wideband state or a narrowband state by using a PIN diode. For the narrowband state, two varactor diodes are employed to change the antenna operating frequency from 3.9 to 4.82 GHz continuously, which is different from previous work using PIN diodes to realize a discrete tuning. Similar radiation patterns with low cross-polarization levels are achieved for the two operating states. Measured results on tuning range, radiation patterns, and realized gains are provided, which show good agreement with numerical simulations.", "title": "" }, { "docid": "0bd80f7705539221273314742f278e81", "text": "In January 2012, MITRE performed a real-time, red team/blue team cyber-wargame experiment. This presented the opportunity to blend cyber-warfare with traditional mission planning and execution, including denial and deception tradecraft. The cyberwargame was designed to test a dynamic network defense cyber-security platform being researched in The MITRE Corporation’s Innovation Program called Blackjack, and to investigate the utility of using denial and deception to enhance the defense of information in command and control systems. The Blackjack tool failed to deny the adversary access to real information on the command and control mission system. The adversary had compromised a number of credentials without the computer network defenders’ knowledge, and thereby observed both the real command and control mission system and the fake command and control mission system. However, traditional denial and deception techniques were effective in denying the adversary access to real information on the real command and control mission system, and instead provided the adversary with access to false information on a fake command and control mission system.", "title": "" }, { "docid": "238aac56366875b1714284d3d963fe9b", "text": "We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of $$x_1, \\ldots , x_t$$ to compute $$f(x_1, \\ldots , x_t)$$ but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure. Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either rely on stronger assumptions and provided weaker security guarantees (Goldwasser et al.
in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.", "title": "" }, { "docid": "d103d856c51a4744d563dff2eff224a7", "text": "Automotive engines is an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.", "title": "" }, { "docid": "c81bf639d65789ff488eb2188c310db0", "text": "Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.", "title": "" }, { "docid": "f084a31d156d817c70d0b841767e90d2", "text": "Traffic congestion in metropolitan areas has become more and more serious. Over the past decades, many academic and industrial efforts have been made to alleviate this problem, among which providing accurate, timely and predictive traffic conditions is a promising approach. Nowadays, online open data have rich traffic related information. Typical such resources include official websites of traffic management and operations, web-based map services (like Google map), weather forecasting websites, and local events (sport games, music concerts, etc.) websites. In this paper, online open data are discussed to provide traffic related information. Traffic conditions collected from web based map services are used to demonstrate the feasibility. The stacked long short-term memory model, a kind of deep architecture, is used to learn and predict the patterns of traffic conditions. Experimental results show that the proposed model for traffic condition prediction has superior performance over multilayer perceptron model, decision tree model and support vector machine model.", "title": "" }, { "docid": "866e60129032c4e41761b7b19483c74a", "text": "The technology to immerse people in computer generated worlds was proposed by Sutherland in 1965, and realised in 1968 with a head-mounted display that could present a user with a stereoscopic 3-dimensional view slaved to a sensing device tracking the user's head movements (Sutherland 1965; 1968). The views presented at that time were simple wire frame models. 
The advance of computer graphics knowledge and technology, itself tied to the enormous increase in processing power and decrease in cost, together with the development of relatively efficient and unobtrusive sensing devices, has led to the emergence of participatory immersive virtual environments, commonly referred to as \"virtual reality\" (VR) (Fisher 1982; Fisher et. al. 1986; Teitel 1990; see also SIGGRAPH Panel Proceedings 1989,1990). Ellis defines virtualisation as \"the process by which a human viewer interprets a patterned sensory impression to be an extended object in an environment other than that in which it physically exists\" (Ellis, 1991). In this definition the idea is taken from geometric optics, where the concept of a \"virtual image\" is precisely defined, and is well understood. In the context of virtual reality the \"patterned sensory impressions\" are generated to the human senses through visual, auditory, tactile and kinesthetic displays, though systems that effectively present information in all such sensory modalities do not exist at present. Ellis further distinguishes between a virtual space, image and environment. An example of the first is a flat surface on which an image is rendered. Perspective depth cues, texture gradients, occlusion, and other similar aspects of the image lead to an observer perceiving", "title": "" }, { "docid": "161bfbfef048ba5d3841818278410005", "text": "Memory bandwidth has been one of the most critical system performance bottlenecks. As a result, the HMC (Hybrid Memory Cube) has recently been proposed to improve DRAM bandwidth as well as energy efficiency. In this paper, we explore different system interconnect designs with HMCs. We show that processor-centric network architectures cannot fully utilize processor bandwidth across different traffic patterns. Thus, we propose a memory-centric network in which all processor channels are connected to HMCs and not to any other processors as all communication between processors goes through intermediate HMCs. Since there are multiple HMCs per processor, we propose a distributor-based network to reduce the network diameter and achieve lower latency while properly distributing the bandwidth across different routers and providing path diversity. Memory-centric networks lead to some challenges including higher processor-to-processor latency and the need to properly exploit the path diversity. We propose a pass-through microarchitecture, which, in combination with the proper intra-HMC organization, reduces the zero-load latency while exploiting adaptive (and non-minimal) routing to load-balance across different channels. Our results show that memory-centric networks can efficiently utilize processor bandwidth for different traffic patterns and achieve higher performance by providing higher memory bandwidth and lower latency.", "title": "" } ]
scidocsrr
9ac0e92f19d4afd954d95c20abb6f9a3
Generalized Boosting Algorithms for Convex Optimization
[ { "docid": "f44fad35f68957ff27e9cfb97758cc2d", "text": "Boosting combines weak classifiers to form highly accurate predictors. Although the case of binary classification is well understood, in the multiclass setting, the “correct” requirements on the weak classifier, or the notion of the most efficient boosting algorithms are missing. In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense, boosting algorithms that assume such requirements.", "title": "" } ]
[ { "docid": "8923cd83f3283ef27fca8dd0ecf2a08f", "text": "This paper investigates when users create profiles in different social networks, whether they are redundant expressions of the same persona, or they are adapted to each platform. Using the personal webpages of 116,998 users on About.me, we identify and extract matched user profiles on several major social networks including Facebook, Twitter, LinkedIn, and Instagram. We find evidence for distinct site-specific norms, such as differences in the language used in the text of the profile self-description, and the kind of picture used as profile image. By learning a model that robustly identifies the platform given a user’s profile image (0.657–0.829 AUC) or self-description (0.608–0.847 AUC), we confirm that users do adapt their behaviour to individual platforms in an identifiable and learnable manner. However, different genders and age groups adapt their behaviour differently from each other, and these differences are, in general, consistent across different platforms. We show that differences in social profile construction correspond to differences in how formal or informal", "title": "" }, { "docid": "8f2b9981d15b8839547f56f5f1152882", "text": "In this paper we study how to discover the evolution of topics over time in a time-stamped document collection. Our approach is uniquely designed to capture the rich topology of topic evolution inherent in the corpus. Instead of characterizing the evolving topics at fixed time points, we conceptually define a topic as a quantized unit of evolutionary change in content and discover topics with the time of their appearance in the corpus. Discovered topics are then connected to form a topic evolution graph using a measure derived from the underlying document network. Our approach allows inhomogeneous distribution of topics over time and does not impose any topological restriction in topic evolution graphs. We evaluate our algorithm on the ACM corpus.\n The topic evolution graphs obtained from the ACM corpus provide an effective and concrete summary of the corpus with remarkably rich topology that are congruent to our background knowledge. In a finer resolution, the graphs reveal concrete information about the corpus that were previously unknown to us, suggesting the utility of our approach as a navigational tool for the corpus.", "title": "" }, { "docid": "862a5cd5db69ed632c12c046bb0cf9a2", "text": "One of the great societal challenges that we face today concerns the move to more sustainable patterns of energy consumption, reflecting the need to balance both individual consumer choice and societal demands. In order for this ‘energy turnaround’ to take place, however, reducing residential energy consumption must go beyond using energy-efficient devices: More sustainable behaviour and lifestyles are essential parts of future ‘energy aware’ living.Addressing this issue from an HCI perspective, this paper presents the results of a 3-year research project dealing with the co-design and appropriation of a Home Energy Management System (HEMS) that has been rolled out in a living lab setting with seven households for a period of 18 months. Our HEMS is inspired by feedback systems in Sustainable Interaction Design and allows the monitoring of energy consumption in real-time. 
In contrast to existing research mainly focusing on how technology can persuade people to consume less energy (‘what technology does to people’), our study focuses on the appropriation of energy feedback systems (‘what people do with technology’) and how newly developed practices can become a resource for future technology design. Therefore, we deliberately followed an open research design. In keeping with this approach, our study uncovers various responses, practices and obstacles of HEMS use. We show that HEMS use is characterized by a number of different features. Recognizing the distinctive patterns of technology use in the different households and the evolutionary character of that use within the households, we conclude with a discussion of these patterns in relation to existing research and their meaning for the design of future HEMSs.", "title": "" }, { "docid": "28b3d7fbcb20f5548d22dbf71b882a05", "text": "In this paper, we propose a novel abnormal event detection method with spatio-temporal adversarial networks (STAN). We devise a spatio-temporal generator which synthesizes an inter- frame by considering spatio-temporal characteristics with bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines whether an input sequence is real-normal or not with 3D convolutional layers. These two networks are trained in an adversarial way to effectively encode spatio-temporal features of normal patterns. After the learning, the generator and the discriminator can be independently used as detectors, and deviations from the learned normal patterns are detected as abnormalities. Experimental results show that the proposed method achieved competitive performance compared to the state-of-the-art methods. Further, for the interpretation, we visualize the location of abnormal events detected by the proposed networks using a generator loss and discriminator gradients.", "title": "" }, { "docid": "1cbd768c8838660bb50908ed6b3d494f", "text": "Data mining concept is growing fast in popularity, it is a technology that involving methods at the intersection of (Artificial intelligent, Machine learning, Statistics and database system), the main goal of data mining process is to extract information from a large data into form which could be understandable for further use. Some algorithms of data mining are used to give solutions to classification problems in database. In this paper a comparison among three classification’s algorithms will be studied, these are (KNearest Neighbor classifier, Decision tree and Bayesian network) algorithms. The paper will demonstrate the strength and accuracy of each algorithm for classification in term of performance efficiency and time complexity required. For model validation purpose, twenty-four-month data analysis is conducted on a mock-up basis.", "title": "" }, { "docid": "4d44572846a0989bf4bc230b669c88b7", "text": "Application-specific integrated circuit (ASIC) ML4425 is often used for sensorless control of permanent-magnet (PM) brushless direct current (BLDC) motor drives. It integrates the terminal voltage of the unenergized winding that contains the back electromotive force (EMF) information and uses a phase-locked loop (PLL) to determine the proper commutation sequence for the BLDC motor. However, even without pulsewidth modulation, the terminal voltage is distorted by voltage pulses due to the freewheel diode conduction. The pulses, which appear very wide in an ultrahigh-speed (120 kr/min) drive, are also integrated by the ASIC. 
Consequently, the motor commutation is significantly retarded, and the drive performance is deteriorated. In this paper, it is proposed that the ASIC should integrate the third harmonic back EMF instead of the terminal voltage, such that the commutation retarding is largely reduced and the motor performance is improved. Basic principle and implementation of the new ASIC-based sensorless controller will be presented, and experimental results will be given to verify the control strategy. On the other hand, phase delay in the motor currents arises due to the influence of winding inductance, reducing the drive performance. Therefore, a novel circuit with discrete components is proposed. It also uses the integration of third harmonic back EMF and the PLL technique and provides controllable advanced commutation to the BLDC motor.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "e59f3f8e0deea8b4caa32b54049ad76b", "text": "We present AD, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs, based on the alternating directions method of multipliers. Like other dual decomposition algorithms, AD has a modular architecture, where local subproblems are solved independently, and their solutions are gathered to compute a global update. The key characteristic of AD is that each local subproblem has a quadratic regularizer, leading to faster convergence, both theoretically and in practice. We provide closed-form solutions for these AD subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD compares favorably with the state-of-the-art.", "title": "" }, { "docid": "2c33527287ead83ac4be0c14a68c7349", "text": "In object recognition, soft-assignment coding enjoys computational efficiency and conceptual simplicity. However, its classification performance is inferior to the newly developed sparse or local coding schemes. 
It would be highly desirable if its classification performance could become comparable to the state-of-the-art, leading to a coding scheme which perfectly combines computational efficiency and classification performance. To achieve this, we revisit soft-assignment coding from two key aspects: classification performance and probabilistic interpretation. For the first aspect, we argue that the inferiority of soft-assignment coding is due to its neglect of the underlying manifold structure of local features. To remedy this, we propose a simple modification to localize the soft-assignment coding, which surprisingly achieves comparable or even better performance than existing sparse or local coding schemes while maintaining its computational advantage. For the second aspect, based on our probabilistic interpretation of the soft-assignment coding, we give a probabilistic explanation to the magic max-pooling operation, which has successfully been used by sparse or local coding schemes but still poorly understood. This probability explanation motivates us to develop a new mix-order max-pooling operation which further improves the classification performance of the proposed coding scheme. As experimentally demonstrated, the localized soft-assignment coding achieves the state-of-the-art classification performance with the highest computational efficiency among the existing coding schemes.", "title": "" }, { "docid": "a5c67537b72e3cd184b43c0a0e7c96b2", "text": "These notes give a short introduction to Gaussian mixture models (GMMs) and the Expectation-Maximization (EM) algorithm, first for the specific case of GMMs, and then more generally. These notes assume you’re familiar with basic probability and basic calculus. If you’re interested in the full derivation (Section 3), some familiarity with entropy and KL divergence is useful but not strictly required. The notation here is borrowed from Introduction to Probability by Bertsekas & Tsitsiklis: random variables are represented with capital letters, values they take are represented with lowercase letters, pX represents a probability distribution for random variable X, and pX(x) represents the probability of value x (according to pX). We’ll also use the shorthand notation X 1 to represent the sequence X1, X2, . . . , Xn, and similarly x n 1 to represent x1, x2, . . . , xn. These notes follow a development somewhat similar to the one in Pattern Recognition and Machine Learning by Bishop.", "title": "" }, { "docid": "7917c6d9a9d495190e5b7036db92d46d", "text": "Background A precise understanding of the anatomical structures of the heart and great vessels is essential for surgical planning in order to avoid unexpected findings. Rapid prototyping techniques are used to print three-dimensional (3D) replicas of patients’ cardiovascular anatomy based on 3D clinical images such as MRI. The purpose of this study is to explore the use of 3D patient-specific cardiovascular models using rapid prototyping techniques to improve surgical planning in patients with complex congenital heart disease.", "title": "" }, { "docid": "41a14778bb7603ee2faf6d8df46e2749", "text": "Absenteeism is an issue that has grown in importance over the past few years; however, little has been done to explore the impact of presenteeism on individual and organisational performance and well-being. This article is based on interviews collected in nine case study organisations in the UK. 
Two sector organisations (one private and one public) were studied to examine absence management and a conceptual model of presenteeism, with further illustration provided using data from the other seven case studies. This enabled a pattern of presenteeism to emerge, along with the contextual and individual factors which impact on it. In addition to previous research, we found that presenteeism is a complex ‘problem’ and that it is not a single one-dimensional construct, but is continually being shaped by individual and organisational factors. In addition, we found that performance and well-being are more closely related to the organisational reaction to presenteeism and absenteeism, rather than the act itself. Contact: Dr Denise Baker-McClearn, Health Sciences Research Institute, Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK. Email: denise.baker@warwick.ac.uk", "title": "" }, { "docid": "34ab20699d12ad6cca34f67cee198cd9", "text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.", "title": "" }, { "docid": "a8c373535cfc4a574f0a91eca1eb10c3", "text": "Changes in the media landscape have made simultaneous usage of the computer and television increasingly commonplace, but little research has explored how individuals navigate this media multitasking environment. Prior work suggests that self-insight may be limited in media consumption and multitasking environments, reinforcing a rising need for direct observational research. A laboratory experiment recorded both younger and older individuals as they used a computer and television concurrently, multitasking across television and Internet content. Results show that individuals are attending primarily to the computer during media multitasking. Although gazes last longer on the computer when compared to the television, the overall distribution of gazes is strongly skewed toward very short gazes only a few seconds in duration. People switched between media at an extreme rate, averaging more than 4 switches per min and 120 switches over the 27.5-minute study exposure. 
Participants had little insight into their switching activity and recalled their switching behavior at an average of only 12 percent of their actual switching rate revealed in the objective data. Younger individuals switched more often than older individuals, but other individual differences such as stated multitasking preference and polychronicity had little effect on switching patterns or gaze duration. This overall pattern of results highlights the importance of exploring new media environments, such as the current drive toward media multitasking, and reinforces that self-monitoring, post hoc surveying, and lay theory may offer only limited insight into how individuals interact with media.", "title": "" }, { "docid": "b46b8dd33cf82d82d41f501ea87ebfc1", "text": "Repetition is a core principle in music. This is especially true for popular songs, generally marked by a noticeable repeating musical structure, over which the singer performs varying lyrics. On this basis, we propose a simple method for separating music and voice, by extraction of the repeating musical structure. First, the period of the repeating structure is found. Then, the spectrogram is segmented at period boundaries and the segments are averaged to create a repeating segment model. Finally, each time-frequency bin in a segment is compared to the model, and the mixture is partitioned using binary time-frequency masking by labeling bins similar to the model as the repeating background. Evaluation on a dataset of 1,000 song clips showed that this method can improve on the performance of an existing music/voice separation method without requiring particular features or complex frameworks.", "title": "" }, { "docid": "5691a43e4ea629e2cb2d5df928813247", "text": "Due to the inherent uncertainty involved in renewable energy forecasting, uncertainty quantification is a key input to maintain acceptable levels of reliability and profitability in power system operation. A proposal is formulated and evaluated here for the case of solar power generation, when only power and meteorological measurements are available, without sky-imaging and information about cloud passages. Our empirical investigation reveals that the distribution of forecast errors do not follow any of the common parametric densities. This therefore motivates the proposal of a nonparametric approach to generate very short-term predictive densities, i.e., for lead times between a few minutes to one hour ahead, with fast frequency updates. We rely on an Extreme Learning Machine (ELM) as a fast regression model, trained in varied ways to obtain both point and quantile forecasts of solar power generation. Four probabilistic methods are implemented as benchmarks. Rival approaches are evaluated based on a number of test cases for two solar power generation sites in different climatic regions, allowing us to show that our approach results in generation of skilful and reliable probabilistic forecasts in a computationally efficient manner.", "title": "" }, { "docid": "39fe1618fad28ec6ad72d326a1d00f24", "text": "Popular real-time public events often cause upsurge of traffic in Twitter while the event is taking place. These posts range from real-time update of the event's occurrences highlights of important moments thus far, personal comments and so on. A large user group has evolved who seeks these live updates to get a brief summary of the important moments of the event so far. 
However, major social search engines including Twitter still present the tweets satisfying the Boolean query in reverse chronological order, resulting in thousands of low quality matches agglomerated in a prosaic manner. To get an overview of the happenings of the event, a user is forced to read scores of uninformative tweets causing frustration. In this paper, we propose a method for multi-tweet summarization of an event. It allows the search users to quickly get an overview about the important moments of the event. We have proposed a graph-based retrieval algorithm that identifies tweets with popular discussion points among the set of tweets returned by Twitter search engine in response to a query comprising the event related keywords. To ensure maximum coverage of topical diversity, we perform topical clustering of the tweets before applying the retrieval algorithm. Evaluation performed by summarizing the important moments of a real-world event revealed that the proposed method could summarize the proceeding of different segments of the event with up to 81.6% precision and up to 80% recall.", "title": "" }, { "docid": "2997fc35a86646d8a43c16217fc8079b", "text": "During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or “tweets”). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred , a real-time system that assigns a credibility score to tweets in a user’s timeline. TweetCred , available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.", "title": "" }, { "docid": "bfbe4db13bfd1980aaae4cdf9e978e63", "text": "We establish in 2D, the PDE associated with a classical debluring filter, the Kramer operator and compare it with another classical shock filter.", "title": "" }, { "docid": "9001def80e94598f1165a867f3f6a09b", "text": "Microbial polyhydroxyalkanoates (PHA) have been developed as biodegradable plastics for the past many years. However, PHA still have only a very limited market. Because of the availability of large amount of shale gas, petroleum will not raise dramatically in price, this situation makes PHA less competitive compared with low cost petroleum based plastics. Therefore, two strategies have been adopted to meet this challenge: first, the development of a super PHA production strain combined with advanced fermentation processes to produce PHA at a low cost; second, the construction of functional PHA production strains with technology to control the precise structures of PHA molecules, this will allow the resulting PHA with high value added applications. The recent systems and synthetic biology approaches allow the above two strategies to be implemented. 
In the not-so-distant future, the new technology will allow PHA to be produced at a price competitive with petroleum-based plastics.", "title": "" } ]
scidocsrr
d2b9433e3e0b417dec522ab35e0e8295
Forensic analysis of social networks (case study)
[ { "docid": "63c842f58bdbbeecabaf6c61d8f891c4", "text": "iii Acknowledgements iv List of Tables viii List of Figures ix Chapters", "title": "" } ]
[ { "docid": "adb02577e7fba530c2406fbf53571d14", "text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.", "title": "" }, { "docid": "1eee4b9b835eafe2948b96d0612805a1", "text": "Virtual Machine Introspection (VMI) is a technique that enables monitoring virtual machines at the hypervisor layer. This monitoring concept has gained recently a considerable focus in computer security research due to its complete but semantic less visibility on virtual machines activities and isolation from them. VMI works range from addressing the semantic gap problem to leveraging explored VMI techniques in order to provide novel hypervisor-based services that belong to different fields. This paper aims to survey and classify existing VMI techniques and their applications.", "title": "" }, { "docid": "d7acbf20753e2c9c50b2ab0683d7f03a", "text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "title": "" }, { "docid": "bd3a2546d9f91f224e76759c087a7a1e", "text": "In this paper, we present a practical relay attack that can be mounted on RFID systems found in many applications nowadays. The described attack uses a self-designed proxy device to forward the RF communication from a reader to a modern NFC-enabled smart phone (Google Nexus S). The phone acts as a mole to inquire a victim’s card in the vicinity of the system. As a practical demonstration of our attack, we target a widely used accesscontrol application that usually grants access to office buildings using a strong AES authentication feature. 
Our attack successfully relays this authentication process via a Bluetooth channel (> 50 meters) within several hundred milliseconds. As a result, we were able to impersonate an authorized user and to enter the building without being detected.", "title": "" }, { "docid": "82e298f7a7c8a4788310ed77f7dfb44f", "text": "Internet addiction (IA) incurs significant social and financial costs in the form of physical side-effects, academic and occupational impairment, and serious relationship problems. The majority of previous studies on Internet addiction disorders (IAD) have focused on structural and functional abnormalities, while few studies have simultaneously investigated the structural and functional brain alterations underlying individual differences in IA tendencies measured by questionnaires in a healthy sample. Here we combined structural (regional gray matter volume, rGMV) and functional (resting-state functional connectivity, rsFC) information to explore the neural mechanisms underlying IAT in a large sample of 260 healthy young adults. The results showed that IAT scores were significantly and positively correlated with rGMV in the right dorsolateral prefrontal cortex (DLPFC, one key node of the cognitive control network, CCN), which might reflect reduced functioning of inhibitory control. More interestingly, decreased anticorrelations between the right DLPFC and the medial prefrontal cortex/rostral anterior cingulate cortex (mPFC/rACC, one key node of the default mode network, DMN) were associated with higher IAT scores, which might be associated with reduced efficiency of the CCN and DMN (e.g., diminished cognitive control and self-monitoring). Furthermore, the Stroop interference effect was positively associated with the volume of the DLPFC and with the IA scores, as well as with the connectivity between DLPFC and mPFC, which further indicated that rGMV variations in the DLPFC and decreased anticonnections between the DLPFC and mPFC may reflect addiction-related reduced inhibitory control and cognitive efficiency. These findings suggest the combination of structural and functional information can provide a valuable basis for further understanding of the mechanisms and pathogenesis of IA.", "title": "" }, { "docid": "a435814e2af70acf985068a17f23845b", "text": "Dropout is a simple yet effective algorithm for regularizing neural networks by randomly dropping out units through Bernoulli multiplicative noise, and for some restricted problem classes, such as linear or logistic regression, several theoretical studies have demonstrated the equivalence between dropout and a fully deterministic optimization problem with data-dependent Tikhonov regularization. This work presents a theoretical analysis of dropout for matrix factorization, where Bernoulli random variables are used to drop a factor, thereby attempting to control the size of the factorization. While recent work has demonstrated the empirical effectiveness of dropout for matrix factorization, a theoretical understanding of the regularization properties of dropout in this context remains elusive. This work demonstrates the equivalence between dropout and a fully deterministic model for matrix factorization in which the factors are regularized by the sum of the product of the norms of the columns. 
While the resulting regularizer is closely related to a variational form of the nuclear norm, suggesting that dropout may limit the size of the factorization, we show that it is possible to trivially lower the objective value by doubling the size of the factorization. We show that this problem is caused by the use of a fixed dropout rate, which motivates the use of a rate that increases with the size of the factorization. Synthetic experiments validate our theoretical findings.", "title": "" }, { "docid": "409b257d38faef216a1056fd7c548587", "text": "Reservoir computing systems utilize dynamic reservoirs having short-term memory to project features from the temporal inputs into a high-dimensional feature space. A readout function layer can then effectively analyze the projected features for tasks, such as classification and time-series analysis. The system can efficiently compute complex and temporal data with low-training cost, since only the readout function needs to be trained. Here we experimentally implement a reservoir computing system using a dynamic memristor array. We show that the internal ionic dynamic processes of memristors allow the memristor-based reservoir to directly process information in the temporal domain, and demonstrate that even a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition. The system is also used to experimentally solve a second-order nonlinear task, and can successfully predict the expected output without knowing the form of the original dynamic transfer function. Reservoir computing facilitates the projection of temporal input signals onto a high-dimensional feature space via a dynamic system, known as the reservoir. Du et al. realise this concept using metal-oxide-based memristors with short-term memory to perform digit recognition tasks and solve non-linear problems.", "title": "" }, { "docid": "77b5e915e53e0ec69d3c412e7faaf253", "text": "This paper provides a method for planning fuel-optimal trajectories for multiple unmanned aerial vehicles to reconfigure and traverse between goal points in a dynamic environment in real-time. Recent developments in robot motion planning have shown that trajectory optimization of linear vehicle systems including collision avoidance can be written as a linear program subject to mixed integer constraints, known as a mixed integer linear program (MILP). This paper extends the trajectory optimization to a class of nonlinear systems: differentially flat systems using MILP. A polynomial basis for a Ritz approximation of the optimal solution reduces the optimization variables and computation time without discretizing the systems. Based on the differential flatness property of unmanned vehicle systems, the trajectory planner satisfies the kinematic constraints of the individual vehicles while accounting for inter-vehicle collision and path constraints. The analytical fuel-optimal trajectories are smooth and continuous. Illustrative trajectory planning examples of multiple unmanned aerial vehicles are presented.", "title": "" }, { "docid": "299e7f7d1c48d4a6a22c88dcf422f7a1", "text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. 
These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.", "title": "" }, { "docid": "f1220465c3ac6da5a2edc96b5979d4be", "text": "We consider Complexity Leadership Theory [Uhl-Bien, M., Marion, R., & McKelvey, B. (2007). Complexity Leadership Theory: Shifting leadership from the industrial age to the knowledge era. The Leadership Quarterly.] in contexts of bureaucratic forms of organizing to describe how adaptive dynamics can work in combination with administrative functions to generate emergence and change in organizations. Complexity leadership approaches are consistent with the central assertion of the meso argument that leadership is multi-level, processual, contextual, and interactive. In this paper we focus on the adaptive function, an interactive process between adaptive leadership (an agentic behavior) and complexity dynamics (nonagentic social dynamics) that generates emergent outcomes (e.g., innovation, learning, adaptability) for the firm. Propositions regarding the actions of complexity leadership in bureaucratic forms of organizing are offered. © 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a57c4685bf7cb0015b8e4ec96e6a0183", "text": "How can we connect artificial intelligence with cognitive psychology? What kind of models and approaches were developed in these scientific fields? The main aim of this paper is to provide a broad summary and analyses about the relationships between psychology and artificial intelligence. I present the state of the art applications, human like thinking and acting systems (Human Computer Interface, Modelling Mental Processes, Data Mining Application) Application can be divided into several groups and aspects. Main goal of the artificial intelligence was/is to develop human level intelligence, but the technology transfer turned out to be much comprehensive, and these systems are used widely, and the research is blooming. The first part of the paper introduces the development, and the basic knowledge, general models of the cognitive psychology (gives also its relevant connecting points to artificial intelligence), it describes also the information processing model of the human brain. The second part provides analyses of the human computing interaction, its tasks, application fields, the psychological models used for HCI, and the barriers of the field. In order to extend or defeat these barriers, the science has to face several scientific, pragmatic, and technical challenges (such as the problem of complexity, disturbing coefficients... etc). 
Other important area demonstrated in this paper is the mental modelling used to prevent, prognoses, manipulate, or to support the human mental processes, like learning. By a prognoses (for example prognoses of the children affected by mental illnesses according to their environments. etc), data mining, knowledge discovery, or expert systems are applied. The paper gives an outline about in the system used coefficients, and analyses the missing attributes. The last part deals with the expert systems used to help people and relatives with autism and with the life simulation (applied mental model) in the virtual reality/virtual environment.", "title": "" }, { "docid": "504cb4e0f2b054f4e0b90fd7d9ab2253", "text": "A monolithic radio frequency power amplifier for 1.9- 2.6 GHz has been realized in a 0.25 µm SiGe-bipolar technology. The balanced 2-stage push-pull power amplifier uses two on-chip transformers as input-balun and for interstage matching and is operating down to supply voltages as low as 1 V. A microstrip line balun acts as output matching network. At 1 V, 1.5 V, 2 V supply voltages output powers of 20 dBm, 23.5 dBm, 26 dBm are achieved at 2.45 GHz. The respective power added efficiency is 36%, 49.5%, 53%. The small-signal gain is 33 dB.", "title": "" }, { "docid": "d1756aa5f0885157bdad130d96350cd3", "text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.", "title": "" }, { "docid": "85b9cd3e6f0f55ad4aea17a52e25bcf8", "text": "Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.", "title": "" }, { "docid": "36776b1372e745f683ca66e7c4421a76", "text": "This paper presents the analyzed results of rotational torque and suspension force in a bearingless motor with the short-pitch winding, which are based on the computation by finite element method (FEM). The bearingless drive technique is applied to a conventional brushless DC motor, in which the stator windings are arranged at the short-pitch, and encircle only a single stator tooth. 
At first, the winding arrangement in the stator core, the principle of suspension force generation and the magnetic suspension control method are shown in the bearingless motor with brushless DC structure. The torque and suspension force are computed by FEM using a machine model with the short-pitch winding arrangement, and the computed results are compared between the full-pitch and short-pitch winding arrangements. The advantages of short-pitch winding arrangement are found on the basis of computed results and discussion.", "title": "" }, { "docid": "d473154967f8fc522bd0d2a95f29bdc3", "text": "This paper presents a model for Virtual Network Function (VNF) placement and chaining across Cloud environments. We propose a new analytical approach for joint VNFs placement and traffic steering for complex service chains and different VNF types. A custom greedy algorithm is also proposed to compare with our solution. Performance evaluation results show that our approach is fast and stable and has a execution time that essentially depends only on the NFV infrastructure size.", "title": "" }, { "docid": "7e71c614713dce3513ebc1f1aa07579a", "text": "Because of the long colonial history of Filipinos and the highly Americanized climate of postcolonial Philippines, many scholars from various disciplines have speculated that colonialism and its legacies may play major roles in Filipino emigration to the United States. However, there are no known empirical studies in psychology that specifically investigate whether colonialism and its effects have influenced the psychological experiences of Filipino American immigrants prior to their arrival in the United States. Further, there is no existing empirical study that specifically investigates the extent to which colonialism and its legacies continue to influence Filipino American immigrants' mental health. Thus, using interviews (N = 6) and surveys (N = 219) with Filipino American immigrants, two studies found that colonialism and its consequences are important factors to consider when conceptualizing the psychological experiences of Filipino American immigrants. Specifically, the findings suggest that (a) Filipino American immigrants experienced ethnic and cultural denigration in the Philippines prior to their U.S. arrival, (b) ethnic and cultural denigration in the Philippines and in the United States may lead to the development of colonial mentality (CM), and (c) that CM may have negative mental health consequences among Filipino American immigrants. The two studies' findings suggest that the Filipino American immigration experience cannot be completely captured by the voluntary immigrant narrative, as they provide empirical support to the notion that the Filipino American immigration experience needs to be understood in the context of colonialism and its most insidious psychological legacy- CM.", "title": "" }, { "docid": "fd4bddf9a5ff3c3b8577c46249bec915", "text": "In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory in effect creating a neural network pushdown automata (NNPDA). This paper discusses in detail this NNPDA its construction, how it can be trained and how useful symbolic information can be extracted from the trained network. 
In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automata (PDA). Simulations show that in learning deterministic context-free grammars the balanced parenthesis language, 1^n0^n, and the deterministic Palindrome the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.", "title": "" }, { "docid": "1a446469e6b4357373b61f88255407cf", "text": "In the Western Hemisphere, Zika virus is thought to be transmitted primarily by Aedes aegypti mosquitoes. To determine the extent to which Ae. albopictus mosquitoes from the United States are capable of transmitting Zika virus and the influence of virus dose, virus strain, and mosquito species on vector competence, we evaluated multiple doses of representative Zika virus strains in Ae. aegypti and Ae. albopictus mosquitoes. Virus preparation (fresh vs. frozen) significantly affected virus infectivity in mosquitoes. We calculated 50% infectious doses to be 6.1-7.5 log10 PFU/mL; minimum infective dose was 4.2 log10 PFU/mL. Ae. albopictus mosquitoes were more susceptible to infection than Ae. aegypti mosquitoes, but transmission efficiency was higher for Ae. aegypti mosquitoes, indicating a transmission barrier in Ae. albopictus mosquitoes. Results suggest that, although Zika virus transmission is relatively inefficient overall and dependent on virus strain and mosquito species, Ae. albopictus mosquitoes could become major vectors in the Americas.", "title": "" }, { "docid": "0cae8939c57ff3713d7321102c80816e", "text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.", "title": "" } ]
scidocsrr
d5e665934b98be6d01dd149f9f15fa2e
Statics modeling of an underactuated wire-driven flexible robotic arm
[ { "docid": "2089f931cf6fca595898959cbfbca28a", "text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.", "title": "" } ]
[ { "docid": "4c67d3686008e377220314323a35eecb", "text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.", "title": "" }, { "docid": "1766d61252101e10d0fde31ba3c304e7", "text": "The mobile ecosystem is constantly changing. The roles of each actor are uncertain and the question how each actor cooperates with each other is of interest of researchers both in academia and industry. In this paper we examine the mobile ecosystem from a business perspective. We used five mobile companies as case studies, which were investigated through interviews and questionnaire surveys. The companies covered different roles in the ecosystem, including network operator, device manufacturer, and application developer. With our empirical data as a starting point, we analyze the revenue streams of different actors in the ecosystem. The results will contribute to an understanding of the business models and dependencies that characterize actors in the current mobile ecosystem.", "title": "" }, { "docid": "7514deb49197a5078b1cf9f8f789eee9", "text": "The phrase table is considered to be the main bilingual resource for the phrase-based statistical machine translation (PBSMT) model. During translation, a source sentence is decomposed into several phrases. The best match of each source phrase is selected among several target-side counterparts within the phrase table, and processed by the decoder to generate a sentence-level translation. The best match is chosen according to several factors, including a set of bilingual features. PBSMT engines by default provide four probability scores in phrase tables which are considered as the main set of bilingual features. Our goal is to enrich that set of features, as a better feature set should yield better translations. We propose new scores generated by a Convolutional Neural Network (CNN) which indicate the semantic relatedness of phrase pairs. We evaluate our model in different experimental settings with different language pairs. We observe significant improvements when the proposed features are incorporated into the PBSMT pipeline.", "title": "" }, { "docid": "38dbfacffdb6a9982c14b27b5fef93df", "text": "Information Science Abstracts (ISA) is the oldest abstracting and indexing (A&I) publication covering the field of information science. A&I publications play a valuable “gatekeeping” role in identifying changes in a discipline by tracking its literature. This article briefly reviews the history of ISA as well as the history of attempts to define “information science” because the American Documentation Institute changed its name to ASIS in 1970. A new working definition of the term for ISA is derived from both the historical review and current technological advances. The definition departs from the previous document-centric definitions and concentrates on the Internet-dominated industry of today. 
Information science is a discipline drawing on important concepts from a number of closely related disciplines that become a cohesive whole focusing on information. The relationships between these interrelated disciplines are portrayed on a “map” of the field, in which the basic subjects are shown as a central “core” with related areas surrounding it.", "title": "" }, { "docid": "43e151ee05922e620e2bbac197357ffd", "text": "Modelling artificial neural networks for accurate time series prediction poses multiple challenges, in particular specifying the network architecture in accordance with the underlying structure of the time series. The data generating processes may exhibit a variety of stochastic or deterministic time series patterns of single or multiple seasonality, trends and cycles, overlaid with pulses, level shifts and structural breaks, all depending on the discrete time frequency in which it is observed. For heterogeneous datasets of time series, such as the 2008 ESTSP competition, a universal methodology is required for automatic network specification across varying data patterns and time frequencies. We propose a fully data driven forecasting methodology that combines filter and wrapper approaches for feature selection, including automatic feature evaluation, construction and transformation. The methodology identifies time series patterns, creates and transforms explanatory variables and specifies multilayer perceptrons for heterogeneous sets of time series without expert intervention. Examples of the valid and reliable performance in comparison to established benchmark methods are shown for a set of synthetic time series and for the ESTSP’08 competition dataset, where the proposed methodology obtained second place. & 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "671516d02d7e95df00de476ab1e3b455", "text": "Microservices are sweeping through cloud design architectures, at once embodying new trends and making use of previous paradigms. This column explores the basis for these trends in both modern and historical standards, and sets out a direction for the future of microservices development.", "title": "" }, { "docid": "18aeabe12c3f890b5aa6d5b1f6ded386", "text": "Many stream-based applications have sophisticated data processing requirements and real-time performance expectations that need to be met under high-volume, time-varying data streams. In order to address these challenges, we propose novel operator scheduling approaches that specify (1) which operators to schedule (2) in which order to schedule the operators, and (3) how many tuples to process at each execution step. We study our approaches in the context of the Aurora data stream manager. We argue that a fine-grained scheduling approach in combination with various scheduling techniques (such as batching of operators and tuples) can significantly improve system efficiency by reducing various system overheads. We also discuss application-aware extensions that make scheduling decisions according to per-application Quality of Service (QoS) specifications. Finally, we present prototype-based experimental results that characterize the efficiency and effectiveness of our approaches under various stream workloads and processing scenarios.", "title": "" }, { "docid": "93c819e7fa80de9e059cc564badec5fa", "text": "The ARRAU corpus is an anaphorically annotated corpus of English providing rich linguistic information about anaphora resolution. 
The most distinctive feature of the corpus is the annotation of a wide range of anaphoric relations, including bridging references and discourse deixis in addition to identity (coreference). Other distinctive features include treating all NPs as markables, including nonreferring NPs; and the annotation of a variety of morphosyntactic and semantic mention and entity attributes, including the genericity status of the entities referred to by markables. The corpus however has not been extensively used for anaphora resolution research so far. In this paper, we discuss three datasets extracted from the ARRAU corpus to support the three subtasks of the CRAC 2018 Shared Task– identity anaphora resolution over ARRAU-style markables, bridging references resolution, and discourse deixis; the evaluation scripts assessing system performance on those datasets; and preliminary results on these three tasks that may serve as baseline for subsequent research in these phenomena.", "title": "" }, { "docid": "54e541c0a2c8c90862ce5573899aacc7", "text": "The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape of maximal area that can move around a right-angled corner in a hallway of unit width. It is known that a maximal area shape exists, and that its area is at least 2.2195 . . .—the area of an explicit construction found by Gerver in 1992—and at most 2 √ 2 ≈ 2.82, with the lower bound being conjectured as the true value. We prove a new and improved upper bound of 2.37. The method involves a computer-assisted proof scheme that can be used to rigorously derive further improved upper bounds that converge to the correct value.", "title": "" }, { "docid": "6d380dc3fe08d117c090120b3398157b", "text": "Conversational interfaces are likely to become more efficient, intuitive and engaging way for human-computer interaction than today’s text or touch-based interfaces. Current research efforts concerning conversational interfaces focus primarily on question answering functionality, thereby neglecting support for search activities beyond targeted information lookup. Users engage in exploratory search when they are unfamiliar with the domain of their goal, unsure about the ways to achieve their goals, or unsure about their goals in the first place. Exploratory search is often supported by approaches from information visualization. However, such approaches cannot be directly translated to the setting of conversational search. In this paper we investigate the affordances of interactive storytelling as a tool to enable exploratory search within the framework of a conversational interface. Interactive storytelling provides a way to navigate a document collection in the pace and order a user prefers. In our vision, interactive storytelling is to be coupled with a dialogue-based system that provides verbal explanations and responsive design. We discuss challenges and sketch the research agenda required to put this vision into life.", "title": "" }, { "docid": "f3286ddc33169c65ead1c5661f7b38e1", "text": "Many educational institutions, especially higher education institutions, are considering to embrace smartphones as part of learning aids in classes as most students (in many cases all students) not only own them but also are also attached to them. The main question is whether embracing smartphones in classroom teaching enhances the learning or perhaps an interference. This paper presents the finding of our study on embracing smartphone in classroom teaching. 
The study was carried out through a survey and interview/discussion with a focus group of students. We found that they use their smartphones to access teaching materials or supporting information, which are normally accessible through the Internet. Students use smartphones as learning aids due many reasons such as they provide convenience, portability, comprehensive learning experiences, multi sources and multitasks, and environmentally friendly. They also use smartphones to interact with teachers outside classes and using smartphones to manage their group assignments. However, integrating smartphones in a classroom-teaching environment is a challenging task. Lecturers may need to incorporate smartphones in teaching and learning to create attractive teaching and optimum interaction with students in classes while mitigating or at least minimising distractions that can be created. Some of the challenges are distraction, dependency, lacking hands on skills, and the reduce quality of face-to-face interaction. To avoid any disturbances in using smartphones within a classroom environment, proper rules of using smartphones in class should be established before teaching, and students need to abide to these rules.", "title": "" }, { "docid": "3e46e094088e44d6b6a96b58fe167c46", "text": "Functional magnetic resonance imaging (fMRI) studies of the human brain have suggested that low-frequency fluctuations in resting fMRI data collected using blood oxygen level dependent (BOLD) contrast correspond to functionally relevant resting state networks (RSNs). Whether the fluctuations of resting fMRI signal in RSNs are a direct consequence of neocortical neuronal activity or are low-frequency artifacts due to other physiological processes (e.g., autonomically driven fluctuations in cerebral blood flow) is uncertain. In order to investigate further these fluctuations, we have characterized their spatial and temporal properties using probabilistic independent component analysis (PICA), a robust approach to RSN identification. Here, we provide evidence that: i. RSNs are not caused by signal artifacts due to low sampling rate (aliasing); ii. they are localized primarily to the cerebral cortex; iii. similar RSNs also can be identified in perfusion fMRI data; and iv. at least 5 distinct RSN patterns are reproducible across different subjects. The RSNs appear to reflect \"default\" interactions related to functional networks related to those recruited by specific types of cognitive processes. RSNs are a major source of non-modeled signal in BOLD fMRI data, so a full understanding of their dynamics will improve the interpretation of functional brain imaging studies more generally. Because RSNs reflect interactions in cognitively relevant functional networks, they offer a new approach to the characterization of state changes with pathology and the effects of drugs.", "title": "" }, { "docid": "abc82f9d4aa6e8cc8bc4d58a10291430", "text": "With the recent advances in computer-supported cooperative work systems and increasing popularization of speech-based interfaces, groupware attempting to emulate a knowledgeable participant in a collaborative environment is bound to become a reality in the near future. In this paper, we present IdeaWall, a real-time system that continuously extracts essential information from a verbal discussion and augments that information with web-search materials. IdeaWall provides combinatorial visual stimuli to the participants to facilitate their creative process. 
We develop three cognitive strategies, from which a prototype application with three display modes was designed, implemented, and evaluated. The results of the user study with twelve groups show that IdeaWall effectively presents visual cues to facilitate verbal creative collaboration for idea generation and sets the stage for future research on intelligent systems that assist collaborative work.", "title": "" }, { "docid": "610476babafbf2785ace600ed409638c", "text": "In the utility grid interconnection of photovoltaic (PV) energy sources, inverters determine the overall system performance, which result in the demand to route the grid connected transformerless PV inverters (GCTIs) for residential and commercial applications, especially due to their high efficiency, light weight, and low cost benefits. In spite of these benefits of GCTIs, leakage currents due to distributed PV module parasitic capacitances are a major issue in the interconnection, as they are undesired because of safety, reliability, protective coordination, electromagnetic compatibility, and PV module lifetime issues. This paper classifies the kW and above range power rating GCTI topologies based on their leakage current attributes and investigates and/illustrates their leakage current characteristics by making use of detailed microscopic waveforms of a representative topology of each class. The cause and quantity of leakage current for each class are identified, not only providing a good understanding, but also aiding the performance comparison and inverter design. With the leakage current characteristic investigation, the study places most topologies under small number of classes with similar leakage current attributes facilitating understanding, evaluating, and the design of GCTIs. Establishing a clear relation between the topology type and leakage current characteristic, the topology families are extended with new members, providing the design engineers a variety of GCTI topology configurations with different characteristics.", "title": "" }, { "docid": "d65279ebd7c525eff509baf2a97b3f76", "text": "It is well known that irony is one of the most subtle devices used to, in a refined way and without a negation marker, deny what is literally said. As such, its automatic detection would represent valuable knowledge regarding tasks as diverse as sentiment analysis, information extraction, or decision making. The research described in this article is focused on identifying key values of components to represent underlying characteristics of this linguistic phenomenon. In the absence of a negation marker, we focus on representing the core of irony by means of three conceptual layers. These layers involve 8 different textual features. By representing four available data sets with these features, we try to find hints about how to deal with this unexplored task from a computational point of view. Our findings are assessed by human annotators in two strata: isolated sentences and entire documents. The results show how complex and subjective the task of automatically detecting irony could be.", "title": "" }, { "docid": "9ad1acc78312d66f3e37dfb39f4692df", "text": "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. 
The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.", "title": "" }, { "docid": "b6043969fad2b2fd195a069fcf003ca1", "text": "In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JRS.11.042609]", "title": "" }, { "docid": "b7c9e2900423a0cd7cc21c3aa95ca028", "text": "In this article, the state of the art of research on emotion work (emotional labor) is summarized with an emphasis on its effects on well-being. It starts with a definition of what emotional labor or emotion work is. Aspects of emotion work, such as automatic emotion regulation, surface acting, and deep acting, are discussed from an action theory point of view. Empirical studies so far show that emotion work has both positive and negative effects on health. Negative effects were found for emotional dissonance. Concepts related to the frequency of emotion expression and the requirement to be sensitive to the emotions of others had both positive and negative effects. Control and social support moderate relations between emotion work variables and burnout and job satisfaction. Moreover, there is empirical evidence that the cooccurrence of emotion work and organizational problems leads to high levels of burnout. D 2002 Published by Elsevier Science Inc.", "title": "" }, { "docid": "2fc05946c4e17c0ca199cc8896e38362", "text": "Hierarchical multilabel classification allows a sample to belong to multiple class labels residing on a hierarchy, which can be a tree or directed acyclic graph (DAG). However, popular hierarchical loss functions, such as the H-loss, can only be defined on tree hierarchies (but not on DAGs), and may also under- or over-penalize misclassifications near the bottom of the hierarchy. 
Besides, it has been relatively unexplored on how to make use of the loss functions in hierarchical multilabel classification. To overcome these deficiencies, we first propose hierarchical extensions of the Hamming loss and ranking loss which take the mistake at every node of the label hierarchy into consideration. Then, we first train a general learning model, which is independent of the loss function. Next, using Bayesian decision theory, we develop Bayes-optimal predictions that minimize the corresponding risks with the trained model. Computationally, instead of requiring an exhaustive summation and search for the optimal multilabel, the resultant optimization problem can be efficiently solved by a greedy algorithm. Experimental results on a number of real-world data sets show that the proposed Bayes-optimal classifier outperforms state-of-the-art methods.", "title": "" }, { "docid": "0678581b45854e8903c0812a25fd9ad1", "text": "In this study we explored the relationship between narcissism and the individual's use of personal pronouns during extemporaneous monologues. The subjects, 24 males and 24 females, were asked to talk for approximately 5 minutes on any topic they chose. Following the monologues the subjects were administered the Narcissistic Personality Inventory, the Eysenck Personality Questionnaire, and the Rotter Internal-External Locus of Control Scale. The monologues were tape-recorded and later transcribed and analyzed for the subjects' use of personal pronouns. As hypothesized, individuals who scored higher on narcissism tended to use more first person singular pronouns and fewer first person plural pronouns. Discriminant validity for the relationship between narcissism and first person pronoun usage was exhibited in that narcissism did not show a relationship with subjects' use of second and third person pronouns, nor did the personality variables of extraversion, neuroticism, or locus of control exhibit any relationship with the subjects' personal pronoun usage.", "title": "" } ]
scidocsrr
9c3fc17385d97af04784daebe5d1ce20
Effect of peer support on prevention of postnatal depression among high risk women: multisite randomised controlled trial
[ { "docid": "c6b1ad47687dbd86b28a098160f406bb", "text": "The development of a 10-item self-report scale (EPDS) to screen for Postnatal Depression in the community is described. After extensive pilot interviews a validation study was carried out on 84 mothers using the Research Diagnostic Criteria for depressive illness obtained from Goldberg's Standardised Psychiatric Interview. The EPDS was found to have satisfactory sensitivity and specificity, and was also sensitive to change in the severity of depression over time. The scale can be completed in about 5 minutes and has a simple method of scoring. The use of the EPDS in the secondary prevention of Postnatal Depression is discussed.", "title": "" } ]
[ { "docid": "7be1f8be2c74c438b1ed1761e157d3a3", "text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.", "title": "" }, { "docid": "2ab32a04c2d0af4a76ad29ce5a3b2748", "text": "The future of solid-state lighting relies on how the performance parameters will be improved further for developing high-brightness light-emitting diodes. Eventually, heat removal is becoming a crucial issue because the requirement of high brightness necessitates high-operating current densities that would trigger more joule heating. Here we demonstrate that the embedded graphene oxide in a gallium nitride light-emitting diode alleviates the self-heating issues by virtue of its heat-spreading ability and reducing the thermal boundary resistance. The fabrication process involves the generation of scalable graphene oxide microscale patterns on a sapphire substrate, followed by its thermal reduction and epitaxial lateral overgrowth of gallium nitride in a metal-organic chemical vapour deposition system under one-step process. The device with embedded graphene oxide outperforms its conventional counterpart by emitting bright light with relatively low-junction temperature and thermal resistance. This facile strategy may enable integration of large-scale graphene into practical devices for effective heat removal.", "title": "" }, { "docid": "83b376a0bd567e24dd1d3b5d415e08b2", "text": "BACKGROUND\nThe biomechanical effects of lateral meniscal posterior root tears with and without meniscofemoral ligament (MFL) tears in anterior cruciate ligament (ACL)-deficient knees have not been studied in detail.\n\n\nPURPOSE\nTo determine the biomechanical effects of the lateral meniscus (LM) posterior root tear in ACL-intact and ACL-deficient knees. In addition, the biomechanical effects of disrupting the MFLs in ACL-deficient knees with meniscal root tears were evaluated.\n\n\nSTUDY DESIGN\nControlled laboratory study.\n\n\nMETHODS\nTen paired cadaveric knees were mounted in a 6-degrees-of-freedom robot for testing and divided into 2 groups. 
The sectioning order for group 1 was (1) ACL, (2) LM posterior root, and (3) MFLs, and the order for group 2 was (1) LM posterior root, (2) ACL, and (3) MFLs. For each cutting state, displacements and rotations of the tibia were measured and compared with the intact state after a simulated pivot-shift test (5-N·m internal rotation torque combined with a 10-N·m valgus torque) at 0°, 20°, 30°, 60°, and 90° of knee flexion; an anterior translation load (88 N) at 0°, 30°, 60°, and 90° of knee flexion; and internal rotation (5 N·m) at 0°, 30°, 60°, 75°, and 90°.\n\n\nRESULTS\nCutting the LM root and MFLs significantly increased anterior tibial translation (ATT) during a pivot-shift test at 20° and 30° when compared with the ACL-cut state (both Ps < .05). During a 5-N·m internal rotation torque, cutting the LM root in ACL-intact knees significantly increased internal rotation by between 0.7° ± 0.3° and 1.3° ± 0.9° (all Ps < .05) except at 0° (P = .136). When the ACL + LM root cut state was compared with the ACL-cut state, the increase in internal rotation was significant at greater flexion angles of 75° and 90° (both Ps < .05) but not between 0°and 60° (all Ps > .2). For an anterior translation load, cutting the LM root in ACL-deficient knees significantly increased ATT only at 30° (P = .007).\n\n\nCONCLUSION\nThe LM posterior root was a significant stabilizer of the knee for ATT during a pivot-shift test at lower flexion angles and internal rotation at higher flexion angles.\n\n\nCLINICAL RELEVANCE\nIncreased knee anterior translation and rotatory instability due to posterior lateral meniscal root disruption may contribute to increased loads on an ACL reconstruction graft. It is recommended that lateral meniscal root tears be repaired at the same time as an ACL reconstruction to prevent possible ACL graft overload.", "title": "" }, { "docid": "0535b81322bdd4a5690c5d421f5ac1b7", "text": "Social media presents unique challenges for topic classification, including the brevity of posts, the informal nature of conversations, and the frequent reliance on external hyperlinks to give context to a conversation. In this paper we investigate the usefulness of these external hyperlinks for determining the topic of an individual post. We focus specifically on hyperlinks to objects which have related metadata available on the Web, including Amazon products and YouTube videos. Our experiments show that the inclusion of metadata from hyperlinked objects in addition to the original post content improved classifier performance measured with the F-score from 84% to 90%. Further, even classification based on object metadata alone outperforms classification based on the original post content.", "title": "" }, { "docid": "720648646b401761ee53b9b4c8844849", "text": "Theorists have suggested some people find it easier to express their ‘‘true selves’’ online than in person. Among 523 participants in an online study, Shyness was positively associated with online ‘Real Me’ self location, while Conscientiousness was negatively associated with an online self. Extraversion was indirectly negatively associated with an online self, mediated by Shyness. Neuroticism was positively associated with an online self, partly mediated by Shyness. 107 online and offline friends of participants provided ratings of them. Overall, both primary participants and their observers indicated that offline relationships were closer. 
However, participants who located their Real Me online reported feeling closer to their online friends than did those locating their real selves offline. To test whether personality is better expressed in online or offline interactions, observers' ratings of participants' personalities were compared. Both online and offline observers' ratings of Extraversion, Agreeableness and Conscientiousness correlated with participants' self-reports. However, only offline observers' ratings of Neuroticism correlated with participants' own. Except for Neuroticism, the similarity of online and offline observers' personality ratings to participants' self-reports did not differ significantly. The study provides no evidence that online self-presentations are more authentic; indeed Neuroticism may be more visibly", "title": "" }, { "docid": "91e8516d2e7e1e9de918251ac694ee08", "text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.", "title": "" }, { "docid": "e4007c7e6a80006238e1211a213e391b", "text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a different parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantified. Relative rankings of the policies are obtained, depending on the specific workload characteristics. 
A trade-off is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.", "title": "" }, { "docid": "0dc7d10302ebaee75ba4223c5f4147e9", "text": "This paper presents a novel nonlinearity correction algorithm for wideband frequency modulated continuous wave (FMCW) radars based on high-order ambiguity functions (HAF) and time resampling. By emphasizing the polynomial-phase nature of the FMCW signal, it is shown that the HAF is an excellent tool for estimating the sweep nonlinearity polynomial coefficients. The estimated coefficients are used to build a correction function which is applied to the beat signal by time resampling. The nonlinearity correction algorithm is tested by simulation and validated on real data sets acquired with an X-band FMCW radar.", "title": "" }, { "docid": "0cf9fe94b6bdb224c326444268e467f4", "text": "Mobile payment is a very important and critical solution for mobile commerce. A user-friendly mobile payment solution is strongly needed to support mobile users to conduct secure and reliable payment transactions using mobile devices. This paper presents an innovative mobile payment system based on 2-Dimensional (2D) barcodes for mobile users to improve mobile user experience in mobile payment. Unlike other existing mobile payment systems, the proposed payment solution provides distinct advantages to support buy-and-sale products and services based on 2D Barcodes. This system uses one standard 2D Barcode (DataMatrix) as an example to demonstrate how to deal with underlying mobile business workflow, mobile transactions and security issues. The paper discusses system architecture, design and implementation of the proposed mobile payment solution, as well as 2D barcode based security solutions. In addition, this paper also presents some application examples of the system.", "title": "" }, { "docid": "9b073b904551c5a855ee21f5790e950b", "text": "Alcohols (CnHn+2OH) are classified into primary, secondary, and tertiary alcohols, which can be branched or unbranched. They can also feature more than one OH-group (two OH-groups = diol; three OH-groups = triol). Presently, except for ethanol and sugar alcohols, they are mainly produced from fossil-based resources, such as petroleum, gas, and coal. Methanol and ethanol have the highest annual production volume accounting for 53 and 91 million tons/year, respectively. Most alcohols are used as fuels (e.g., ethanol), solvents (e.g., butanol), and chemical intermediates. This chapter gives an overview of recent research on the production of short-chain unbranched alcohols (C1-C5), focusing in particular on propanediols (1,2- and 1,3-propanediol), butanols, and butanediols (1,4- and 2,3-butanediol). It also provides a short summary on biobased higher alcohols (>C5) including branched alcohols.", "title": "" }, { "docid": "d3a6be631dcf65791b4443589acb6880", "text": "We present a deep generative model for Zero-Shot Learning (ZSL). Unlike most existing methods for this problem, that represent each class as a point (via a semantic embedding), we represent each seen/unseen class using a class-specific latent-space distribution, conditioned on class attributes. 
We use these latent-space distributions as a prior for a supervised variational autoencoder (VAE), which also facilitates learning highly discriminative feature representations for the inputs. The entire framework is learned end-to-end using only the seen-class training data. At test time, the label for an unseen-class test input is the class that maximizes the VAE lower bound. We further extend the model to a (i) semi-supervised/transductive setting by leveraging unlabeled unseen-class data via an unsupervised learning module, and (ii) few-shot learning where we also have a small number of labeled inputs from the unseen classes. We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of benchmark data sets.", "title": "" }, { "docid": "fa3587a9f152db21ec7fe5e935ebf8ba", "text": "Person re-identification has been usually solved as either the matching of single-image representation (SIR) or the classification of cross-image representation (CIR). In this work, we exploit the connection between these two categories of methods, and propose a joint learning frame-work to unify SIR and CIR using convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network is required to be computed once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is required to be minimal to reduce computational burden. Therefore, the two types of representation can be jointly optimized for pursuing better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method can achieve favorable accuracy while compared with state-of-the-arts.", "title": "" }, { "docid": "1177dc9bc616ef4221a7db7722b58a6c", "text": "The typical septum polarizer that has gained popularity in the literature may be unsuitable for high-power applications, due to the sharp corners in the design. In order to address this issue, the fundamentals of the septum operation are first revisited, using a graphical visualization through full-wave analysis. A septum profiled with smooth edges is next presented, with enhanced power-handling capabilities in comparison to the stepped-septum polarizer. In this work, the sigmoid function is introduced to represent the smooth contour of the septum, and to enable diverse configurations without any discontinuities. The smooth and stepped profiles are optimized using the Particle Swarm Optimization (PSO) technique. The maximum electric-field intensity around the smooth edges is investigated using a full-wave simulator, HFSS. Our observations show that the maximum electric field is reduced by 40% in comparison to the stepped septum. In Appendix 1, the numerical approach is evaluated by comparing the exact series solution for the half-plane scattering problem with the simulated results in HFSS. In Appendix 2, a septum design with rounded edges is also studied as another possible design to reduce the maximum fields.", "title": "" }, { "docid": "6c784fc34cf7a8e700c67235e05d8cb0", "text": "Fully automatic methods that extract lists of objects from the Web have been studied extensively. 
Record extraction, the first step of this object extraction process, identifies a set of Web page segments, each of which represents an individual object (e.g., a product). State-of-the-art methods suffice for simple search, but they often fail to handle more complicated or noisy Web page structures due to a key limitation -- their greedy manner of identifying a list of records through pairwise comparison (i.e., similarity match) of consecutive segments. This paper introduces a new method for record extraction that captures a list of objects in a more robust way based on a holistic analysis of a Web page. The method focuses on how a distinct tag path appears repeatedly in the DOM tree of the Web document. Instead of comparing a pair of individual segments, it compares a pair of tag path occurrence patterns (called visual signals) to estimate how likely these two tag paths represent the same list of objects. The paper introduces a similarity measure that captures how closely the visual signals appear and interleave. Clustering of tag paths is then performed based on this similarity measure, and sets of tag paths that form the structure of data records are extracted. Experiments show that this method achieves higher accuracy than previous methods.", "title": "" }, { "docid": "e1edaf3e8754e8403b9be29f58ba3550", "text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that take into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy compared with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.", "title": "" }, { "docid": "a8f391b630a0261a0693c7038370411a", "text": "In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel air-ground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches. 
The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera. C © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "02b4c741b4a68e1b437674d874f10253", "text": "Traffic sign recognition is an important step for integrating smart vehicles into existing road transportation systems. In this paper, an NVIDIA Jetson TX1-based traffic sign recognition system is introduced for driver assistance applications. The system incorporates two major operations, traffic sign detection and recognition. Image color and shape based detection is used to locate potential signs in each frame. A pre-trained convolutional neural network performs classification on these potential sign candidates. The proposed system is implemented on NVIDIA Jetson TX1 board with web-camera. Based on a well-known benchmark suite, 96% detection accuracy is achieved while executing at 1.6 frames per seconds.", "title": "" }, { "docid": "1bba7daf36d5febe7b77dcb89be421bd", "text": "The past 15 years have seen a rapid expansion in the number of studies using neuroimaging techniques to investigate maturational changes in the human brain. In this paper, I review MRI studies on structural changes in the developing brain, and fMRI studies on functional changes in the social brain during adolescence. Both MRI and fMRI studies point to adolescence as a period of continued neural development. In the final section, I discuss a number of areas of research that are just beginning and may be the subject of developmental neuroimaging in the next twenty years. Future studies might focus on complex questions including the development of functional connectivity; how gender and puberty influence adolescent brain development; the effects of genes, environment and culture on the adolescent brain; development of the atypical adolescent brain; and implications for policy of the study of the adolescent brain.", "title": "" }, { "docid": "4a3555afece9711c3b202f493fcab6a3", "text": "Typically, products in a software product line differ by their functionality, and quality attributes are not intentionally varied. Why, how, and which quality attributes to vary has remained an open issue. A systematically conducted literature review on quality attribute variability is presented, where primary studies are selected by reading all content of full studies in Software Product Line Conference. The results indicate that the success of feature modeling influences the proposed approaches, different approaches suit specific quality attributes differently, and empirical evidence on industrial quality variability is lacking.", "title": "" }, { "docid": "048f553914e3d7419918f6862a6eacd6", "text": "Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases if the retinal morphology experiences critical changes. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachments (PED), which is a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph search based surface detection, PED region detection and surface correction above the PED region. 
The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87±3.36 μm, and is comparable to the mean inter-observer variability ( 7.81±2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predicative value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels.", "title": "" } ]
scidocsrr
8d839b3e2ff986a5ec2111a472d06e72
13.56 MHz high voltage multi-level resonant DC-DC converter
[ { "docid": "f92087a8e81c45cd8bedc12fddd682fc", "text": "This paper presents a novel power conversion method that realizes galvanic isolation by dual safety capacitors (Y-cap) instead of a conventional transformer. With the limited capacitance of the Y capacitor, series resonance is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of the touch current and lets the high-frequency power flow freely. Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while having higher efficiency and smaller size.", "title": "" } ]
[ { "docid": "8020c4f3df7bca37b7ebfcd14ae5299d", "text": "We present a two-part case study to explore how technology toys can promote computational thinking for young children. First, we conducted a formal study using littleBits, a commercially available technology toy, to explore its potential as a learning tool for computational thinking in three different educational settings. Our findings revealed differences in learning indicators across settings. We applied these insights during a teaching project in Cape Town, South Africa, where we partnered with an educational NGO, ORT SA CAPE, to offer enriching learning opportunities for both privileged and impoverished children. We describe our methods, observations, and lessons learned using littleBits to teach computational thinking to children in early elementary school, and discuss how our lab study informed practical work in the developing world.", "title": "" }, { "docid": "f200bd78f0785d4b5c6963b46907f6f1", "text": "We’re releasing highly optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. The kernels allow for efficient evaluation and differentiation of linear layers, including convolutional layers, with flexibly configurable block-sparsity patterns in the weight matrix. We find that depending on the sparsity, these kernels can run orders of magnitude faster than the best available alternatives such as cuBLAS. Using the kernels we improve upon the state-of-the-art in text sentiment analysis and generative modeling of text and images. By releasing our kernels in the open we aim to spur further advancement in model and algorithm design.", "title": "" }, { "docid": "68971b7efc9663c37113749206b5382b", "text": "Trehalose 6-phosphate (Tre6P), the intermediate of trehalose biosynthesis, has a profound influence on plant metabolism, growth, and development. It has been proposed that Tre6P acts as a signal of sugar availability and is possibly specific for sucrose status. Short-term sugar-feeding experiments were carried out with carbon-starved Arabidopsis thaliana seedlings grown in axenic shaking liquid cultures. Tre6P increased when seedlings were exogenously supplied with sucrose, or with hexoses that can be metabolized to sucrose, such as glucose and fructose. Conditional correlation analysis and inhibitor experiments indicated that the hexose-induced increase in Tre6P was an indirect response dependent on conversion of the hexose sugars to sucrose. Tre6P content was affected by changes in nitrogen status, but this response was also attributable to parallel changes in sucrose. The sucrose-induced rise in Tre6P was unaffected by cordycepin but almost completely blocked by cycloheximide, indicating that de novo protein synthesis is necessary for the response. There was a strong correlation between Tre6P and sucrose even in lines that constitutively express heterologous trehalose-phosphate synthase or trehalose-phosphate phosphatase, although the Tre6P:sucrose ratio was shifted higher or lower, respectively. It is proposed that the Tre6P:sucrose ratio is a critical parameter for the plant and forms part of a homeostatic mechanism to maintain sucrose levels within a range that is appropriate for the cell type and developmental stage of the plant.", "title": "" }, { "docid": "aaba5dc8efc9b6a62255139965b6f98d", "text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. 
Building a model of these aspects is extremely complex, making simulation insufficient for accurate validation of control algorithms. If simulation environments are often very efficient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enough support for real-time experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-effectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-effective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.", "title": "" }, { "docid": "7456ceee02f50c9e92a665d362a9a419", "text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "Internet of Things (IoT) is a unified network of physical objects that can change the parameters of the environment or their own, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it, leading to innovative services and an increase in efficiency and productivity. The IoT is enabled by the latest developments in smart sensors, communication technologies, and Internet protocols. This article contains a description of Internet of Things (IoT) networks. Much attention is given to the prospects for the future use of IoT and its development. Some problems of IoT development were noted. 
The article also gives valuable information on building (constructing) IoT systems based on PLC technology.", "title": "" }, { "docid": "4ed74450320dfef4156013292c1d2cbb", "text": "This paper describes the decisions by which the Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so CoRR's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, the CoRR user interface is cumbersome, submissions are only self-indexed (no professional library staff manages the archive), and long-term funding is uncertain.", "title": "" }, { "docid": "616f03bad838129e9aede8f8e707e6fb", "text": "The popularity of Web 2.0 has resulted in a large number of publicly available online consumer reviews created by a demographically diverse user base. Information about the authors of these reviews, such as age, gender and location, provided by many on-line consumer review platforms may allow companies to better understand the preferences of different market segments and improve their product design, manufacturing processes and marketing campaigns accordingly. However, previous work in sentiment analysis has largely ignored these additional user meta-data. To address this deficiency, in this paper, we propose parametric and non-parametric User-aware Sentiment Topic Models (USTM) that incorporate demographic information of review authors into the topic modeling process in order to discover associations between market segments, topical aspects and sentiments. Qualitative examination of the topics discovered using the USTM framework in the two datasets collected from popular online consumer review platforms as well as quantitative evaluation of the methods utilizing those topics for the tasks of review sentiment classification and user attribute prediction both indicate the utility of accounting for demographic information of review authors in opinion mining.", "title": "" }, { "docid": "7be3de98485a50c1ee56d808ad18e0c5", "text": "All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Consequently, plausible models of human cognition should exhibit similar patterns of gradual forgetting of old information as new information is acquired. Only rarely (see Box 3) does new learning in natural cognitive systems completely disrupt or erase previously learned information. In other words, natural cognitive systems do not, in general, forget catastrophically. Unfortunately, however, this is precisely what occurs under certain circumstances in distributed connectionist networks. It turns out that the very features that give these networks their much-touted abilities to generalize, to function in the presence of degraded input, etc., are the root cause of catastrophic forgetting. The challenge is how to keep the advantages of distributed connectionist networks while avoiding the problem of catastrophic forgetting. In this article, we examine the causes, consequences and numerous solutions to the problem of catastrophic forgetting in neural networks. We consider how the brain might have overcome this problem and explore the consequences of this solution. 
By the end of the 1980s many of the early problems with connectionist networks, such as their difficulties with sequence-learning and the profoundly stimulus-response nature of supervised learning algorithms such as error backpropagation, had been largely solved. However, as these problems were being solved, another was discovered by McCloskey and Cohen and by Ratcliff. They suggested that there might be a fundamental limitation to this type of distributed architecture, in the same way that Minsky and Papert [3] had shown twenty years before that there were certain fundamental limitations to what a perceptron [4,5] could do. They observed that under certain conditions, the process of learning a new set of patterns suddenly and completely erased a network’s knowledge of what it had already learned. They referred to this phenomenon as catastrophic interference (or catastrophic forgetting) and suggested that the underlying reason for this difficulty was the very thing — a single set of shared weights — that gave the networks their remarkable abilities to generalize and degrade gracefully. Catastrophic interference is a radical manifestation of a more general problem for connectionist models of memory — in fact, for any model of memory — the so-called “stability-plasticity” problem [6,7]. The problem is how to design a system that is simultaneously sensitive to, but not radically disrupted by, new input. In this article we will focus primarily on a particular, widely used class of distributed neural network architectures — namely, those with a single set of shared (or partially shared) multiplicative weights. While this defines a very broad class of networks, this definition is certainly not exhaustive. In the remainder of this article we will discuss the numerous attempts over the last decade to solve this problem within the context of this type of network.", "title": "" }, { "docid": "f573c79dde4ce12c234df084dea149b4", "text": "The presence of geometric details on object surfaces dramatically changes the way light interacts with these surfaces. Although synthesizing realistic pictures requires simulating this interaction as faithfully as possible, explicitly modeling all the small details tends to be impractical. To address these issues, an image-based technique called relief mapping has recently been introduced for adding per-fragment details onto arbitrary polygonal models (Policarpo et al. 2005). The technique has been further extended to render correct silhouettes (Oliveira and Policarpo 2005) and to handle non-height-field surface details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some linear search procedure. While the binary search converges very fast, the linear search (required to avoid missing large structures) is prone to aliasing, by possibly missing some thin structures, as is evident in Figure 18-1a. Several space-leaping techniques have since been proposed to accelerate the ray-height-field intersection and to minimize the occurrence of aliasing (Donnelly 2005, Dummer 2006, Baboud and Décoret 2006). Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the intersection calculation for the average case and avoids skipping height-field structures by using some precomputed data (a cone map). 
However, because CSM uses a conservative approach, the rays tend to stop before the actual surface, which introduces different Relaxed Cone Stepping for Relief Mapping", "title": "" }, { "docid": "638e6cc3b3bc22377d471c51ee17c000", "text": "Computer security students benefit from hands-on experience applying security tools and techniques to attack and defend vulnerable systems. Virtual machines (VMs) provide an effective way of sharing targets for hacking. However, developing these hacking challenges is time consuming, and once created, essentially static. That is, once the challenge has been \"solved\" there is no remaining challenge for the student, and if the challenge is created for a competition or assessment, the challenge cannot be reused without risking plagiarism, and collusion. Security Scenario Generator (SecGen) can build complex VMs based on randomised scenarios, with a number of diverse use-cases, including: building networks of VMs with randomised services and in-thewild vulnerabilities and with themed content, which can form the basis of penetration testing activities; VMs for educational lab use; and VMs with randomised CTF challenges. SecGen has a modular architecture which can dynamically generate challenges by nesting modules, and a hints generation system, which is designed to provide scaffolding for novice security students to make progress on complex challenges. SecGen has been used for teaching at universities, and hosting a recent UK-wide CTF event.", "title": "" }, { "docid": "1c7457ef393a604447b0478451ef0c62", "text": "Melasma is an acquired increased pigmentation of the skin [1], a symmetric hypermelanosis, characterized by irregular light to gray brown macules. Melasma comes from the Greek word melas [= black color), formerly known as Chloasma, another Greek word meaning green color, even though the term was more often used for melasma cases during pregnancy. It is considered to be part of a large group of facial melanosis, such as Riehl’s melanosis, Lichen planuspigmentous, erythema dyschromicumperstans, erythrosis and poikiloderma of Civatte [2]. Hyperpigmented macules and patches are most commonly developed in the sun-exposed areas of the skin [3]. Melasma is considered to be a chronic acquired hypermelanosis of the skin [4], with poorly understood pathogenesis [5]. The increased pigmentation and the photo damaged features that characterize melasma include solar elastosis, even though the main pathogenesis still remains unknown [6].", "title": "" }, { "docid": "068d87d2f1e24fdbe8896e0ab92c2934", "text": "This paper presents a primary color optical pixel sensor circuit that utilizes hydrogenated amorphous silicon thin-film transistors (TFTs). To minimize the effect of ambient light on the sensing result of optical sensor circuit, the proposed sensor circuit combines photo TFTs with color filters to sense a primary color optical input signal. A readout circuit, which also uses thin-film transistors, is integrated into the sensor circuit for sampling the stored charges in the pixel sensor circuit. Measurements demonstrate that the signal-to-noise ratio of the proposed sensor circuit is unaffected by ambient light under illumination up to 12 000 lux by white LEDs. 
Thus, the proposed optical pixel sensor circuit is suitable for receiving primary color optical input signals in large TFT-LCD panels.", "title": "" }, { "docid": "a88cbc1a779763fe6724f732c20b423a", "text": "Surface Acoustic Wave (SAW) devices, are not normally amenable to simulation through circuit simulators. In this letter, an electrical macromodel of Mason's Equivalent Circuit for an interdigital transducer (IDT) is proposed which is compatible to a widely used general purpose circuit simulator SPICE endowed with the capability to handle negative capacitances and inductances. Illustrations have been given to demonstrate the simplicity of ascertaining the frequency and time domain characteristics of IDT and amenability to simulate the IDT along with other external circuit elements.<<ETX>>", "title": "" }, { "docid": "faca51b6762e4d7c3306208ad800abd3", "text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.", "title": "" }, { "docid": "5c13abdcfaa701acf736c2b2852b5a49", "text": "Membrane cofactor protein (CD46) is complement regulatory protein with probable function in the reproduction process. Expression of CD46 on human, mice, rat and guinea pig spermatozoa is restricted to the inner acrosomal membrane. In spite of the presence of anti-sperm antibodies and other potential complement activating agents in follicular fluid, CD46 is not expressed on the plasma membrane of spermatozoa as the other complement regulatory proteins (DAF and CD59) in human. Using dual immunofluorescence labelling with mAb IVA-520 (anti-bovine CD46) and various lectins with different binding pattern or monoclonal antibody ACR.4, targeted against intra-acrosomal protein, we excluded the expression of CD46 on the inner acrosomal membrane as well as in the acrosomal content but, we suggested the localization of this molecule on the outer acrosomal membrane and possibly on the plasma membrane of bovine sperm.", "title": "" }, { "docid": "57ccc061377399b669d5ece668b7e030", "text": "We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. 
For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.", "title": "" }, { "docid": "c1f9740f056ceb7653fe37c4902f62b6", "text": "This work explores the use of Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) for automatic language identification (LID). The use of RNNs is motivated by their better ability in modeling sequences with respect to feed forward networks used in previous works. We show that LSTM RNNs can effectively exploit temporal dependencies in acoustic data, learning relevant features for language discrimination purposes. The proposed approach is compared to baseline i-vector and feed forward Deep Neural Network (DNN) systems in the NIST Language Recognition Evaluation 2009 dataset. We show LSTM RNNs achieve better performance than our best DNN system with an order of magnitude fewer parameters. Further, the combination of the different systems leads to significant performance improvements (up to 28%).", "title": "" }, { "docid": "ca655b741316e8c65b6b7590833396e1", "text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "9ba6265ae3f4dd77743ee4d0972cef9d", "text": "The path loss measurements and modeling in durian orchard for wireless network at 5.8 GHz are presented. The path loss model is important to predict and design the wireless communication system in the durian orchard. The transmitter is set up near the bole of a durian tree and received the signal strength by spectrum analyzer. The communication channel is connected between the communication nodes. From the measurement results, it is observed that the path loss along the distance of 128 meters from the transmitter is less than −50 dB. The dual-slope modeling is employed to represent the path loss propagation in the durian orchard at 5.8 GHz.", "title": "" } ]
scidocsrr
9eb292e6d44df6fdddd532082cd65247
Personality within Information Systems Research: a literature Analysis
[ { "docid": "717bb81a5000035b1199eeb3b2308518", "text": "Technology acceptance research has tended to focus on instrumental beliefs such as perceived usefulness and perceived ease of use as drivers of usage intentions, with technology characteristics as major external stimuli. Behavioral sciences and individual psychology, however, suggest that social influences and personal traits such as individual innovativeness are potentially important determinants of adoption as well, and may be a more important element in potential adopters’ decisions. This paper models and tests these relationships in non-work settings among several latent constructs such as intention to adopt wireless mobile technology, social influences, and personal innovativeness. Structural equation analysis reveals strong causal relationships between the social influences, personal innovativeness and the perceptual beliefs—usefulness and ease of use, which in turn impact adoption intentions. The paper concludes with some important implications for both theory research and implementation strategies. q 2005 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "2f7990443281ed98189abb65a23b0838", "text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the", "title": "" }, { "docid": "661d5db6f4a8a12b488d6f486ea5995e", "text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, but there are no comprehensive studies that completely cover all different aspects in the problem. This paper presented a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture was proposed which was divided into four steps specifying through four pivotal questions starting with ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system could be gained by answering these questions. Each step of this reference roadmap proposed a specific concern of a special portion of the issue. Two main research gaps were proposed by this reference roadmap.", "title": "" }, { "docid": "2e7a88fb1eef478393a99366ff7089c8", "text": "Asbestos has been described as a physical carcinogen in that long thin fibers are generally more carcinogenic than shorter thicker ones. It has been hypothesized that long thin fibers disrupt chromosome behavior during mitosis, causing chromosome abnormalities which lead to cell transformation and neoplastic progression. Using high-resolution time lapse video-enhanced light microscopy and the uniquely suited lung epithelial cells of the newt Taricha granulosa, we have characterized for the first time the behavior of crocidolite asbestos fibers, and their interactions with chromosomes, during mitosis in living cells. We found that the keratin cage surrounding the mitotic spindle inhibited fiber migration, resulting in spindles with few fibers. As in interphase, fibers displayed microtubule-mediated saltatory movements. Fiber position was only slightly affected by the ejection forces of the spindle asters. Physical interactions between crocidolite fibers and chromosomes occurred randomly within the spindle and along its edge. Crocidolite fibers showed no affinity toward chromatin and most encounters ended with the fiber passively yielding to the chromosome. In a few encounters along the spindle edge the chromosome yielded to the fiber, which remained stationary as if anchored to the keratin cage. We suggest that fibers thin enough to be caught in the keratin cage and long enough to protrude into the spindle are those fibers with the ability to snag or block moving chromosomes.", "title": "" }, { "docid": "89b73780755b1ee92babc7ce3933c05e", "text": "Big Data analytics provide support for decision making by discovering patterns and other useful information from large set of data. 
Organizations utilizing advanced analytics techniques to gain real value from Big Data will grow faster than their competitors and seize new opportunities. Cross-Industry Standard Process for Data Mining (CRISP-DM) is an industry-proven way to build predictive analytics models across the enterprise. However, the manual process in CRISP-DM hinders faster decision making on real-time application for efficient data analysis. In this paper, we present an approach to automate the process using Automatic Service Composition (ASC). Focusing on the planning stage of ASC, we propose an ontology-based workflow generation method to automate the CRISP-DM process. Ontology and rules are designed to infer workflow for data analytics process according to the properties of the datasets as well as user needs. Empirical study of our prototyping system has proved the efficiency of our workflow generation method.", "title": "" }, { "docid": "23ee528e0efe7c4fec7f8cda7e49a8dd", "text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliabilitybased design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. A bibliography is provided at the end of this document to facilitate future implementation of the methodology.", "title": "" }, { "docid": "3eb0ed6db613c94af266279bc38c1c28", "text": "We can better understand deep neural networks by identifying which features each of their neurons have learned to detect. To do so, researchers have created Deep Visualization techniques including activation maximization, which synthetically generates inputs (e.g. images) that maximally activate each neuron. A limitation of current techniques is that they assume each neuron detects only one type of feature, but we know that neurons can be multifaceted, in that they fire in response to many different types of features: for example, a grocery store class neuron must activate either for rows of produce or for a storefront. Previous activation maximization techniques constructed images without regard for the multiple different facets of a neuron, creating inappropriate mixes of colors, parts of objects, scales, orientations, etc. Here we introduce an algorithm that explicitly uncovers the multiple facets of each neuron by producing a synthetic visualization of each of the types of images that activate a neuron. We also introduce regularization methods that produce state-of-the-art results in terms of the interpretability of images obtained by activation maximization. 
By separately synthesizing each type of image a neuron fires in response to, the visualizations have more appropriate colors and coherent global structure. Multifaceted feature visualization thus provides a clearer and more comprehensive description of the role of each neuron. Proceedings of the 33 rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s). Figure 1. Top: Visualizations of 8 types of images (feature facets) that activate the same “grocery store” class neuron. Bottom: Example training set images that activate the same neuron, and resemble the corresponding synthetic image in the top panel.", "title": "" }, { "docid": "f8e20b99ea8f6921d1904fe47bc88742", "text": "OBJECTIVE\nTo summarize the role of melatonin and circadian rhythms in determining optimal female reproductive physiology, especially at the peripheral level.\n\n\nDESIGN\nDatabases were searched for the related English-language literature published up to March 1, 2014. Only papers in peer-reviewed journals are cited.\n\n\nSETTING\nNot applicable.\n\n\nPATIENT(S)\nNot applicable.\n\n\nINTERVENTION(S)\nMelatonin treatment, alterations of the normal light:dark cycle and light exposure at night.\n\n\nMAIN OUTCOME MEASURE(S)\nMelatonin levels in the blood and in the ovarian follicular fluid and melatonin synthesis, oxidative damage and circadian rhythm disturbances in peripheral reproductive organs.\n\n\nRESULT(S)\nThe central circadian regulatory system is located in the suprachiasmatic nucleus (SCN). The output of this master clock is synchronized to 24 hours by the prevailing light-dark cycle. The SCN regulates rhythms in peripheral cells via the autonomic nervous system and it sends a neural message to the pineal gland where it controls the cyclic production of melatonin; after its release, the melatonin rhythm strengthens peripheral oscillators. Melatonin is also produced in the peripheral reproductive organs, including granulosa cells, the cumulus oophorus, and the oocyte. These cells, along with the blood, may contribute melatonin to the follicular fluid, which has melatonin levels higher than those in the blood. Melatonin is a powerful free radical scavenger and protects the oocyte from oxidative stress, especially at the time of ovulation. The cyclic levels of melatonin in the blood pass through the placenta and aid in the organization of the fetal SCN. In the absence of this synchronizing effect, the offspring may exhibit neurobehavioral deficits. Also, melatonin protects the developing fetus from oxidative stress. Melatonin produced in the placenta likewise may preserve the optimal function of this organ.\n\n\nCONCLUSION(S)\nBoth stable circadian rhythms and cyclic melatonin availability are critical for optimal ovarian physiology and placental function. Because light exposure after darkness onset at night disrupts the master circadian clock and suppresses elevated nocturnal melatonin levels, light at night should be avoided.", "title": "" }, { "docid": "54ceed51f750eadda3038b42eb9977a5", "text": "Starting from the revolutionary Retinex by Land and McCann, several further perceptually inspired color correction models have been developed with different aims, e.g. reproduction of color sensation, robust features recognition, enhancement of color images. Such models have a differential, spatially-variant and non-linear nature and they can coarsely be distinguished between white-patch (WP) and gray-world (GW) algorithms. 
In this paper we show that the combination of a pure WP algorithm (RSR: random spray Retinex) and an essentially GW one (ACE) leads to a more robust and better performing model (RACE). The choice of RSR and ACE follows from the recent identification of a unified spatially-variant approach for both algorithms. Mathematically, the originally distinct non-linear and differential mechanisms of RSR and ACE have been fused using the spray technique and local average operations. The investigation of RACE allowed us to put in evidence a common drawback of differential models: corruption of uniform image areas. To overcome this intrinsic defect, we devised a local and global contrast-based and image-driven regulation mechanism that has a general applicability to perceptually inspired color correction algorithms. Tests, comparisons and discussions are presented.", "title": "" }, { "docid": "37af8daa32affcdedb0b4820651a0b62", "text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.", "title": "" }, { "docid": "9193aad006395bd3bd76cabf44012da5", "text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.", "title": "" }, { "docid": "b5c2e36e805f3ca96cde418137ed0239", "text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. 
Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.", "title": "" }, { "docid": "69d826aa8309678cf04e2870c23a99dd", "text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.", "title": "" }, { "docid": "57390f3fdf19f09d127a53e74337fe06", "text": "As a competitor for Li4Ti5O12 with a higher capacity and extreme safety, monoclinic TiNb2O7 has been considered as a promising anode material for next-generation high power lithium ion batteries. However, TiNb2O7 suffers from low electronic conductivity and ionic conductivity, which restricts the electrochemical kinetics. Herein, a facile and advanced architecture design of hierarchical TiNb2O7 microspheres is successfully developed for large-scale preparation without any surfactant assistance. To the best of our knowledge, this is the first report on the one step solvothermal synthesis of TiNb2O7 microspheres with micro- and nano-scale composite structures. 
When evaluated as an anode material for lithium ion batteries, the electrode exhibits excellent high rate capacities and ultra-long cyclability, such as 258 mA h g(-1) at 1 C, 175 mA h g(-1) at 5 C, and 138 mA h g(-1) at 10 C, extending to more than 500 cycles.", "title": "" }, { "docid": "3ff55193d10980cbb8da5ec757b9161c", "text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.", "title": "" }, { "docid": "529045d9f2f78b5168ec2c7ca67ea9ab", "text": "The development of a chronic mollusc toxicity test is a current work item on the agenda of the OECD. The freshwater pond snail Lymnaea stagnalis is one of the candidate snail species for such a test. This paper presents a 21-day chronic toxicity test with L. stagnalis, focussing on embryonic development. Eggs were collected from freshly laid egg masses and exposed individually until hatching. The endpoints were hatching success and mean hatching time. Tributyltin (TBT), added as TBT-chloride, was chosen as model substance. The selected exposure concentrations ranged from 0.03 to 10 μg TBT/L (all as nominal values) and induced the full range of responses. The embryos were sensitive to TBT (the NOEC for mean hatching time was 0.03 μg TBT/L and the NOEC for hatching success was 0.1 μg TBT/L). In addition, data on maximum limit concentrations of seven common solvents, recommended in OECD aquatic toxicity testing guidelines, are presented. Among the results, further findings as average embryonic growth and mean hatching time of control groups are provided. In conclusion, the test presented here could easily be standardised and is considered useful as a potential trigger to judge if further studies, e.g. a (partial) life-cycle study with molluscs, should be conducted.", "title": "" }, { "docid": "6b19185466fb134b6bfb09b04b9e4b15", "text": "BACKGROUND\nThe increasing concern about the adverse effects of overuse of smartphones during clinical practicum implies the need for policies restricting smartphone use while attending to patients. It is important to educate health personnel about the potential risks that can arise from the associated distraction.\n\n\nOBJECTIVE\nThe aim of this study was to analyze the relationship between the level of nomophobia and the distraction associated with smartphone use among nursing students during their clinical practicum.\n\n\nMETHODS\nA cross-sectional study was carried out on 304 nursing students. 
The nomophobia questionnaire (NMP-Q) and a questionnaire about smartphone use, the distraction associated with it, and opinions about phone restriction policies in hospitals were used.\n\n\nRESULTS\nA positive correlation between the use of smartphones and the total score of nomophobia was found. In the same way, there was a positive correlation between opinion about smartphone restriction polices with each of the dimensions of nomophobia and the total score of the questionnaire.\n\n\nCONCLUSIONS\nNursing students who show high levels of nomophobia also regularly use their smartphones during their clinical practicum, although they also believe that the implementation of policies restricting smartphone use while working is necessary.", "title": "" }, { "docid": "06614a4d74d2d059944b9487f2966ff4", "text": "In web search, relevance ranking of popular pages is relatively easy, because of the inclusion of strong signals such as anchor text and search log data. In contrast, with less popular pages, relevance ranking becomes very challenging due to a lack of information. In this paper the former is referred to as head pages, and the latter tail pages. We address the challenge by learning a model that can extract search-focused key n-grams from web pages, and using the key n-grams for searches of the pages, particularly, the tail pages. To the best of our knowledge, this problem has not been previously studied. Our approach has four characteristics. First, key n-grams are search-focused in the sense that they are defined as those which can compose \"good queries\" for searching the page. Second, key n-grams are learned in a relative sense using learning to rank techniques. Third, key n-grams are learned using search log data, such that the characteristics of key n-grams in the search log data, particularly in the heads; can be applied to the other data, particularly to the tails. Fourth, the extracted key n-grams are used as features of the relevance ranking model also trained with learning to rank techniques. Experiments validate the effectiveness of the proposed approach with large-scale web search datasets. The results show that our approach can significantly improve relevance ranking performance on both heads and tails; and particularly tails, compared with baseline approaches. Characteristics of our approach have also been fully investigated through comprehensive experiments.", "title": "" }, { "docid": "c772bc43f2b8c76aa3e096405cd1b824", "text": "Application programmers increasingly prefer distributed storage systems with strong consistency and distributed transactions (e.g., Google's Spanner) for their strong guarantees and ease of use. Unfortunately, existing transactional storage systems are expensive to use -- in part because they require costly replication protocols, like Paxos, for fault tolerance. In this paper, we present a new approach that makes transactional storage systems more affordable: we eliminate consistency from the replication protocol while still providing distributed transactions with strong consistency to applications.\n We present TAPIR -- the Transactional Application Protocol for Inconsistent Replication -- the first transaction protocol to use a novel replication protocol, called inconsistent replication, that provides fault tolerance without consistency. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round-trip and order distributed transactions without centralized coordination. 
We demonstrate the use of TAPIR in a transactional key-value store, TAPIR-KV. Compared to conventional systems, TAPIR-KV provides better latency and throughput.", "title": "" }, { "docid": "3550dbe913466a675b621d476baba219", "text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.", "title": "" }, { "docid": "736f8a02bbe5ab9a5b9dd5026430e05c", "text": "We present a novel approach for interactive navigation and planning of multiple agents in crowded scenes with moving obstacles. Our formulation uses a precomputed roadmap that provides macroscopic, global connectivity for wayfinding and combines it with fast and localized navigation for each agent. At runtime, each agent senses the environment independently and computes a collision-free path based on an extended \"Velocity Obstacles\" concept. Furthermore, our algorithm ensures that each agent exhibits no oscillatory behaviors. We have tested the performance of our algorithm in several challenging scenarios with a high density of virtual agents. In practice, the algorithm performance scales almost linearly with the number of agents and can run at interactive rates on multi-core processors.", "title": "" } ]
scidocsrr
03dd415c4c06083c36548784758f6f44
Working Locally Thinking Globally - Part I: Theoretical Guarantees for Convolutional Sparse Coding
[ { "docid": "fb812ad6355e10dafff43c3d4487f6a7", "text": "Image priors are of great importance in image restoration tasks. These problems can be addressed by decomposing the degraded image into overlapping patches, treating the patches individually and averaging them back together. Recently, the Expected Patch Log Likelihood (EPLL) method has been introduced, arguing that the chosen model should be enforced on the final reconstructed image patches. In the context of a Gaussian Mixture Model (GMM), this idea has been shown to lead to state-of-the-art results in image denoising and debluring. In this paper we combine the EPLL with a sparse-representation prior. Our derivation leads to a close yet extended variant of the popular K-SVD image denoising algorithm, where in order to effectively maximize the EPLL the denoising process should be iterated. This concept lies at the core of the K-SVD formulation, but has not been addressed before due the need to set different denoising thresholds in the successive sparse coding stages. We present a method that intrinsically determines these thresholds in order to improve the image estimate. Our results show a notable improvement over K-SVD in image denoising and inpainting, achieving comparable performance to that of EPLL with GMM in denoising.", "title": "" }, { "docid": "e72d47bddec148ed3edbbd26950016be", "text": "Sparse and convolutional constraints form a natural prior for many optimization problems that arise from physical processes. Detecting motifs in speech and musical passages, super-resolving images, compressing videos, and reconstructing harmonic motions can all leverage redundancies introduced by convolution. Solving problems involving sparse and convolutional constraints remains a difficult computational problem, however. In this paper we present an overview of convolutional sparse coding in a consistent framework. The objective involves iteratively optimizing a convolutional least-squares term for the basis functions, followed by an L1-regularized least squares term for the sparse coefficients. We discuss a range of optimization methods for solving the convolutional sparse coding objective, and the properties that make each method suitable for different applications. In particular, we concentrate on computational complexity, speed to convergence, memory usage, and the effect of implied boundary conditions. We present a broad suite of examples covering different signal and application domains to illustrate the general applicability of convolutional sparse coding, and the efficacy of the available optimization methods.", "title": "" }, { "docid": "de0c3f4d5cbad1ce78e324666937c232", "text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an in creasingly popular method for learning visual features, it is most often traine d at the patch level. Applying the resulting filters convolutionally results in h ig ly redundant codes because overlapping patches are encoded in isolation. By tr aining convolutionally over large image windows, our method reduces the redudancy b etween feature vectors at neighboring locations and improves the efficienc y of the overall representation. In addition to a linear decoder that reconstruct s the image from sparse features, our method trains an efficient feed-forward encod er that predicts quasisparse features from the input. 
While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multistage convolutional network architecture improves performance on a number of visual recognition and detection tasks.", "title": "" } ]
[ { "docid": "617a3f1ed0164a058932cd9e96a9d103", "text": "Conventional approaches to speaker diarization use short-term features such as Mel Frequency Cepstral Co-efficients (MFCC). Features such as i-vectors have been used on longer segments (minimum 2.5 seconds of speech). Using i-vectors for speaker diarization has been shown to be beneficial as it models speaker information explicitly. In this paper, the i-vector modelling technique is adapted to be used as short term features for diarization by estimating i-vectors over a short window of MFCCs. The Information Bottleneck (IB) approach provides a convenient platform to integrate multiple features together for fast and accurate diarization of speech. Speaker models are estimated over a window of 10 frames of speech and used as features in the IB system. Experiments on the NIST RT datasets show absolute improvements of 3.9% in the best case when ivectors are used as auxiliary features to MFCC. Further, discriminative training algorithms such as LDA and PLDA are applied on the i-vectors. A best case performance improvement of 5% in absolute terms is obtained on the RT datasets.", "title": "" }, { "docid": "23a21e2d967c8fb8ccc5d282c597ff06", "text": "Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.", "title": "" }, { "docid": "0d9c60a9cdd5809e02da5ba660ba3c65", "text": "In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and performing color consistency between stereo images are a chicken-and-egg problem since it is not a trivial task to simultaneously achieve both goals. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established during constructing a joint pdf of transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), SIFT descriptor, and segment-based plane-fitting to robustly find correspondence for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boost the disparity map estimation. 
Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.", "title": "" }, { "docid": "ffde296b436c2d9f5e2aa85f731a5758", "text": "Financial institutions are interested in ensuring security and quality for their customers. Banks, for instance, need to identify and stop harmful transactions in a timely manner. In order to detect fraudulent operations, data mining techniques and customer profile analysis are commonly used. However, these approaches are not supported by Visual Analytics techniques yet. Visual Analytics techniques have potential to considerably enhance the knowledge discovery process and increase the detection and prediction accuracy of financial fraud detection systems. Thus, we propose EVA, a Visual Analytics approach for supporting fraud investigation, fine-tuning fraud detection algorithms, and thus, reducing false positive alarms.", "title": "" }, { "docid": "66acdc82a531a8ca9817399a2df8a255", "text": "Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.", "title": "" }, { "docid": "824480b0f5886a37ca1930ce4484800d", "text": "Conduction loss reduction technique using a small resonant capacitor for a phase shift full bridge converter with clamp diodes is proposed in this paper. The proposed technique can be implemented simply by adding a small resonant capacitor beside the leakage inductor of transformer. Since the voltage across the small resonant capacitor is applied to the small leakage inductor of transformer during freewheeling period, the primary current can be decreased rapidly. This results in the reduced conduction loss on the secondary side of transformer while the proposed technique can still guarantee the wide ZVS ranges. The operational principles and analysis are presented. 
Experimental results show that the proposed conduction loss reduction technique operates properly.", "title": "" }, { "docid": "012da2dd973e4b3fa94c46e417ed8d17", "text": "Sustainable HCI is now a recognized area of human-computer interaction drawing from a variety of disciplinary approaches, including the arts. How might HCI researchers working on sustainability productively understand the discourses and practices of ecologically engaged art as a means of enriching their own activities? We argue that an understanding of both the history of ecologically engaged art, and the art-historical and critical discourses surrounding it, provide a fruitful entry-point into a more critically aware sustainable HCI. We illustrate this through a consideration of frameworks from the arts, looking specifically at how these frameworks act more as generative devices than prescriptive recipes. Taking artistic influences seriously will require a concomitant rethinking of sustainable HCI standpoints - a potentially useful exercise for HCI research in general.", "title": "" }, { "docid": "3ddf82be24ab5e20c141f67dfde05fdc", "text": "In August 1998, Texas A&M University implemented on campus a trap-test-vaccinate-alter-return-monitor (TTVARM) program to manage the feral cat population. TTVARM is an internationally recognized term for trapping and neutering programs aimed at management of feral cat populations. In this article we summarize results of the program for the period August 1998 to July 2000. In surgery laboratories, senior veterinary students examined cats that were humanely trapped once a month and tested them for feline leukemia and feline immunodeficiency virus infections, vaccinated, and surgically neutered them. They euthanized cats testing positive for either infectious disease. Volunteers provided food and observed the cats that were returned to their capture sites on campus and maintained in managed colonies. The program placed kittens and tame cats for adoption; cats totaled 158. Of the majority of 158 captured cats, there were fewer kittens caught in Year 2 than in Year 1. The proportion of tame cats trapped was significantly greater in Year 2 than in Year 1. The prevalence found for feline leukemia and feline immunodeficiency virus ELISA test positives was 5.8% and 6.5%, respectively. Following surgery, 101 cats returned to campus. The project recaptured, retested, and revaccinated more than one-fourth of the cats due for their annual vaccinations. The program placed 32 kittens, juveniles, and tame adults for adoption. The number of cat complaints received by the university's pest control service decreased from Year 1 to Year 2.", "title": "" }, { "docid": "664a2f7213b27087970305544d83d78f", "text": "We give a new construction of overconvergent modular forms of arbitrary weights, defining them in terms of functions on certain affinoid subsets of Scholze’s infinite-level modular curve. These affinoid subsets, and a certain canonical coordinate on them, play a role in our construction which is strongly analogous with the role of the upper half-plane and its coordinate ‘z’ in the classical analytic theory of modular forms.
As one application of these ideas, we define and study an overconvergent Eichler-Shimura map in the context of compact Shimura curves over Q, proving stronger analogues of results of Andreatta-Iovita-Stevens.", "title": "" }, { "docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e", "text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).", "title": "" }, { "docid": "a9868eeca8a2b94c7bfe2e9bf880645d", "text": "UNLABELLED\nPart 1 of this two-part series (presented in the June issue of IJSPT) provided an introduction to functional movement screening, as well as the history, background, and a summary of the evidence regarding the reliability of the Functional Movement Screen (FMS™). Part 1 presented three of the seven fundamental movement patterns that comprise the FMS™, and the specific ordinal grading system from 0-3, used in the their scoring. Specifics for scoring each test are presented. Part 2 of this series provides a review of the concepts associated with the analysis of fundamental movement as a screening system for functional movement competency. In addition, the four remaining movements of the FMS™, which complement those described in Part 1, will be presented (to complete the total of seven fundamental movements): Shoulder Mobility, the Active Straight Leg Raise, the Trunk Stability Push-up, and Rotary Stability. The final four patterns are described in detail, and the specifics for scoring each test are presented, as well as the proposed clinical implications for receiving a grade less than a perfect \"3\". The intent of this two part series is to present the concepts associated with screening of fundamental movements, whether it is the FMS™ system or a different system devised by another clinician. Such a fundamental screen of the movement system should be incorporated into pre-participation screening and return to sport testing in order to determine whether an athlete has the essential movements needed to participate in sports activities at a level of minimum competency. Part 2 concludes with a discussion of the evidence related to functional movement screening, myths related to the FMS™, the future of functional movement screening, and the concept of movement as a system.\n\n\nLEVEL OF EVIDENCE\n5.", "title": "" }, { "docid": "a016fb3b7e5c4bcf386d775c7c61a887", "text": "How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author’s own perceptions of its credibility. 
We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers’ assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices — and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.", "title": "" }, { "docid": "fb7b31b83a0d79a054bab155dfaae79e", "text": "An otherwise-healthy 13-year-old girl with previously normal nails developed longitudinal pigmented bands on multiple fingernails. Physical examination revealed faintly pigmented bands on multiple fingernails and on the left fifth toenail. We believed that the cause of the pigmented bands was onychophagia-induced longitudinal melanonychia, a rare phenomenon, which emphasizes the need for dermatologists to question patients with melanonychia about their nail biting habits because they may not be forthcoming with this information.", "title": "" }, { "docid": "62ff5888ad0c8065097603da8ff79cd6", "text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.", "title": "" }, { "docid": "4efc6eeabd2a3f6c4f376cb3e533f9d1", "text": "Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. In this paper, we present a novel method for deriving antonym pairs using paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.", "title": "" }, { "docid": "ba34f6120b08c57cec8794ec2b9256d2", "text": "Principles of reconstruction dictate a number of critical points for successful repair. To achieve aesthetic and functional goals, the dermatologic surgeon should avoid deviation of anatomical landmarks and free margins, maintain shape and symmetry, and repair with skin of similar characteristics. 
Reconstruction of the ear presents a number of unique challenges based on the limited amount of adjacent lax tissue within the cosmetic unit and the structure of the auricle, which consists of a relatively thin skin surface and flexible cartilaginous framework.", "title": "" }, { "docid": "dc2d5f9bfe41246ae9883aa6c0537c40", "text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.", "title": "" }, { "docid": "9b4ffbbcd97e94524d2598cd862a400a", "text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.", "title": "" }, { "docid": "66fce3b6c516a4fa4281d19d6055b338", "text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.", "title": "" }, { "docid": "265a4658b20bc59b5b71732864a4ba69", "text": "In this paper, we discuss the implementation of a rule based expert system for diagnosing neuromuscular diseases.
The proposed system is implemented as a rule based expert system in JESS for the diagnosis of Cerebral Palsy, Multiple Sclerosis, Muscular Dystrophy and Parkinson’s disease. In the system, the user is presented with a list of questionnaires about the symptoms of the patients based on which the disease of the patient is diagnosed and possible treatment is suggested. The system can aid and support the patients suffering from neuromuscular diseases to get an idea of their disease and possible treatment for the disease.", "title": "" } ]
scidocsrr
1b25c619f24301d138bf6ed7d4e52cb0
Summarization system evaluation revisited: N-gram graphs
[ { "docid": "6d227bbf8df90274f44a26d9c269c663", "text": "Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system’s classification performance in those cases where it did not do as well. The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles. The system selects the category whose profile has the smallest distance to the document’s profile. The profiles involved are quite small, typically 10K bytes for a category training set, and less than 4K bytes for an individual document. Using N-gram frequency profiles provides a simple and reliable way to categorize documents in a wide range of classification tasks.", "title": "" } ]
[ { "docid": "d10a27e1a43ac6de88b6db4b3874fb11", "text": "In this thesis, a quantitative evaluation is performed to find the most relevant physically based rendering systems in research. As a consequence of this evaluation, the rendering systems Mitsuba, PBRT-v3 and LuxRender are compared to each other and their potential for interoperability is assessed. The goal is to find common materials and light models and analyze the effects of changing the parameters of those models.", "title": "" }, { "docid": "0e4b2c41b3564721f1f4d8a7321356db", "text": "In human-computer conversation systems, the context of a userissued utterance is particularly important because it provides useful background information of the conversation. However, it is unwise to track all previous utterances in the current session as not all of them are equally important. In this paper, we address the problem of session segmentation. We propose an embedding-enhanced TextTiling approach, inspired by the observation that conversation utterances are highly noisy, and that word embeddings provide a robust way of capturing semantics. Experimental results show that our approach achieves better performance than the TextTiling, MMD approaches.", "title": "" }, { "docid": "7fd0466726e23256dc8c63539e90980b", "text": "We are building a system that can automatically acquire 3D range scans and 2D images to build geometrically correct, texture mapped 3D models of urban environments. This paper deals with the problem of automatically registering the 3D range scans with images acquired at other times and with unknown camera calibration and location. The method involves the utilization of parallelism and orthogonality constraints that naturally exist in urban environments. We present results for building a texture mapped 3-D model of an urban", "title": "" }, { "docid": "ec6b1d26b06adc99092659b4a511da44", "text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.", "title": "" }, { "docid": "a212c06f01d746779da52c6ead7e185c", "text": "Existing visual tracking methods usually localize the object with a bounding box, in which the foreground object trackers/detectors are often disturbed by the introduced background information. To handle this problem, we aim to learn a more robust object representation for visual tracking. 
In particular, the tracked object is represented with a graph structure (i.e., a set of non-overlapping image patches), in which the weight of each node (patch) indicates how likely it belongs to the foreground and edges are also weighed for indicating the appearance compatibility of two neighboring nodes. This graph is dynamically learnt (i.e., the nodes and edges received weights) and applied in object tracking and model updating. We constrain the graph learning from two aspects: i) the global low-rank structure over all nodes and ii) the local sparseness of node neighbors. During the tracking process, our method performs the following steps at each frame. First, the graph is initialized by assigning either 1 or 0 to the weights of some image patches according to the predicted bounding box. Second, the graph is optimized through designing a new ALM (Augmented Lagrange Multiplier) based algorithm. Third, the object feature representation is updated by imposing the weights of patches on the extracted image features. The object location is finally predicted by adopting the Struck tracker (Hare, Saffari, and Torr 2011). Extensive experiments show that our approach outperforms the state-of-the-art tracking methods on two standard benchmarks, i.e., OTB100 and NUS-PRO.", "title": "" }, { "docid": "32775ba6d1a26274eaa6ce92513d9850", "text": "Data reduction plays an important role in machine learning and pattern recognition with a high-dimensional data. In real-world applications data usually exists with hybrid formats, and a unified data reducing technique for hybrid data is desirable. In this paper, an information measure is proposed to computing discernibility power of a crisp equivalence relation or a fuzzy one, which is the key concept in classical rough set model and fuzzy-rough set model. Based on the information measure, a general definition of significance of nominal, numeric and fuzzy attributes is presented. We redefine the independence of hybrid attribute subset, reduct, and relative reduct. Then two greedy reduction algorithms for unsupervised and supervised data dimensionality reduction based on the proposed information measure are constructed. Experiments show the reducts found by the proposed algorithms get a better performance compared with classical rough set approaches. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c49b4bf87335ad6620de2c59761f240c", "text": "Due to the continually increasing levels of penetration of distributed generation the correct operation of Loss-Of-Mains protection is of prime importance. Many UK utilities report persistent problems relating to incorrect operation of the ROCOF and Vector Shift methods which are currently the most commonly applied methods for Loss-Of-Mains (LOM) detection. The main focus of this paper is to demonstrate the problems associated with these methods through detailed dynamic modelling of existing available relays. The ability to investigate the transient response of the LOM protection to various system events highlights the main weaknesses of the existing methods, and more importantly, provides the means of quantitative analysis and better understanding of these weaknesses. 
Consequently, the dynamic analysis of the protective algorithms supports the identification of best compromise settings and gives insight to the future areas requiring improvement.", "title": ""}, { "docid": "672e9317533f874caf955271c2a2ea66", "text": "Ant colony optimization (ACO) has been widely used for different combinatorial optimization problems. In this paper, we investigate ACO algorithms with respect to their runtime behavior for the traveling salesperson (TSP) problem. Ant Colony Optimization (ACO) is a heuristic algorithm which has been proven a successful technique and applied to a number of combinatorial optimization (CO) problems. The traveling salesman problem (TSP) is one of the most important combinatorial problems. There are several reasons for the choice of the TSP as the problem to explain the working of ACO algorithms: it is easily understandable, so that the algorithm behavior is not obscured by too many technicalities; and it is a standard test bed for new algorithmic ideas, as good performance on the TSP is often taken as a proof of their usefulness.", "title": "" }, { "docid": "bc5e9005cbcd08b7a9e64b7584b14750", "text": "• Taxonomise It must allow users to locate or select material from a large corpus; to find the pattern(s) they need. • Proximate It must allow users to locate supporting, perhaps inter-related, patterns applicable to their solution – both “broader” and “narrower”. (This use mirrors known designer behaviour, to jump between levels and layers of the problem, undertaking “opportunistic forays into design detail”) • Evaluative It would be desirable if an organising principle allowed users to consider the problem from different viewpoints – so that they could evaluate and change their approach, or, equally, confirm the quality of their existing solution • Generative It should allow users to build new solutions, not previously considered.", "title": "" }, { "docid": "3ddf6fab70092eade9845b04dd8344a0", "text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscript relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of digital realizations and their applications.", "title": "" }, { "docid": "6605397ad283fd4d353150d9066f8e6e", "text": "In this paper we present our continuing efforts to generate narrative using a character-centric approach. In particular we discuss the advantages of explicitly representing the emergent event sequence in order to be able to exert influence on it and generate stories that ‘retell’ the emergent narrative. 
Based on a narrative distinction between fabula, plot and presentation, we make a first step by presenting a model based on story comprehension that can capture the fabula, and show how it can be used for the automatic creation of stories.", "title": "" }, { "docid": "7920ac3492c7b3ef07e33857800ef66f", "text": "Despite processing elements which are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle for those capabilities lies in the use of correlations between patterns in order to identify patterns which are similar. Looking at the brain as an information processing mechanism with (maybe among others) associative processing capabilities, together with the converse view of associative memories as certain types of artificial neural networks, initiated a number of interesting results, ranging from theoretical considerations to insights in the functioning of neurons, as well as parallel hardware implementations of neural associative memories. This paper discusses three main aspects of neural associative memories: theoretical investigations, e.g. on the information storage capacity, local learning rules, effective retrieval strategies, and encoding schemes; implementation aspects, in particular for parallel hardware; and applications. One important outcome of our analytical considerations is that the combination of binary synaptic weights, sparsely encoded memory patterns, and local learning rules (in particular Hebbian learning) leads to favorable representation and access schemes. Based on these considerations, a series of parallel hardware architectures has been developed in the last decade; the current one is the Pan-IV (Parallel Associative Network), which uses the special purpose Bacchus-chips and standard memory for realizing 4096 neurons with 128 MBytes of storage capacity.", "title": "" }, { "docid": "3abf10f8539840b1830f14d83a7d3ab0", "text": "We consider two questions at the heart of machine learning: how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the “noise scale” g = ε(N/B − 1) ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, Bopt ∝ εN. We verify these predictions empirically.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. 
However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "6f9f95f29a2fb1069ce924f733947d7d", "text": "While human action recognition from still images finds wide applications in computer vision, it remains a very challenging problem. Compared with videobased ones, image-based action representation and recognition are impossible to access the motion cues of action, which largely increases the difficulties in dealing with pose variances and cluttered backgrounds. Motivated by the recent success of convolutional neural networks (CNN) in learning discriminative features from objects in the presence of variations and backgrounds, in this paper, we investigate the potentials of CNN in image-based action recognition. A new action recognition method is proposed by implicitly integrating pose hints into the CNN framework, i.e., we use a CNN originally learned for object recognition as a base network and then transfer it to action recognition by training the base network jointly with inference of poses. Such a joint training scheme can guide the network towards pose inference and meanwhile prevent the unrelated knowledge inherited from the base network. For further performance improvement, the training data is augmented by enriching the pose-related samples. The experimental results on three benchmark datasets have demonstrated the effectiveness of our method.", "title": "" }, { "docid": "10b857d497759f7b49d35155e79734f9", "text": "Disclaimer Mention of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health (NIOSH). In addition, citations to Web sites external to NIOSH do not constitute NIOSH endorsement of the sponsoring organizations or their programs or products. Furthermore, NIOSH is not responsible for the content of these Web sites. All Web addresses referenced in this document were accessible as of the publication date. 
Respirable silica dust exposure has long been known to be a serious health threat to workers in many industries. Overexposure to respirable silica dust can lead to the development of silicosis— a lung disease that can be disabling and fatal in its most severe form. Once contracted, there is no cure for silicosis so the goal must be to prevent development by limiting a worker's exposure to respirable silica dust. In addition, the International Agency for Research on Cancer (IARC) has concluded that there is sufficient evidence to classify silica as a human carcinogen.", "title": "" }, { "docid": "8283789e148f6e84f7901dc2a6ad0550", "text": "A physical map has been constructed of the human genome containing 15,086 sequence-tagged sites (STSs), with an average spacing of 199 kilobases. The project involved assembly of a radiation hybrid map of the human genome containing 6193 loci and incorporated a genetic linkage map of the human genome containing 5264 loci. This information was combined with the results of STS-content screening of 10,850 loci against a yeast artificial chromosome library to produce an integrated map, anchored by the radiation hybrid and genetic maps. The map provides radiation hybrid coverage of 99 percent and physical coverage of 94 percent of the human genome. The map also represents an early step in an international project to generate a transcript map of the human genome, with more than 3235 expressed sequences localized. The STSs in the map provide a scaffold for initiating large-scale sequencing of the human genome.", "title": "" }, { "docid": "368f904533e17beec78d347ee8ceabb1", "text": "A brand community from a customer-experiential perspective is a fabric of relationships in which the customer is situated. Crucial relationships include those between the customer and the brand, between the customer and the firm, between the customer and the product in use, and among fellow customers. The authors delve ethnographically into a brand community and test key findings through quantitative methods. Conceptually, the study reveals insights that differ from prior research in four important ways: First, it expands the definition of a brand community to entities and relationships neglected by previous research. 
Second, it treats vital characteristics of brand communities, such as geotemporal concentrations and the richness of social context, as dynamic rather than static phenomena. Third, it demonstrates that marketers can strengthen brand communities by facilitating shared customer experiences in ways that alter those dynamic characteristics. Fourth, it yields a new and richer conceptualization of customer loyalty as integration in a brand community.", "title": "" }, { "docid": "adcaa15fd8f1e7887a05d3cb1cd47183", "text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difficult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has adopted or inherited. The importance of path dependencies is amplified where conditions of increasing returns exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding internally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing internal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
c5bf0001f0e8e10f04e67c21be9a9945
Spectrum and energy efficiency maximization in UAV-enabled mobile relaying
[ { "docid": "6243620ecc902b74a5a1e67a92f2082b", "text": "Wireless communication with unmanned aerial vehicles (UAVs) is a promising technology for future communication systems. In this paper, assuming that the UAV flies horizontally with a fixed altitude, we study energy-efficient UAV communication with a ground terminal via optimizing the UAV’s trajectory, a new design paradigm that jointly considers both the communication throughput and the UAV’s energy consumption. To this end, we first derive a theoretical model on the propulsion energy consumption of fixed-wing UAVs as a function of the UAV’s flying speed, direction, and acceleration. Based on the derived model and by ignoring the radiation and signal processing energy consumption, the energy efficiency of UAV communication is defined as the total information bits communicated normalized by the UAV propulsion energy consumed for a finite time horizon. For the case of unconstrained trajectory optimization, we show that both the rate-maximization and energy-minimization designs lead to vanishing energy efficiency and thus are energy-inefficient in general. Next, we introduce a simple circular UAV trajectory, under which the UAV’s flight radius and speed are jointly optimized to maximize the energy efficiency. Furthermore, an efficient design is proposed for maximizing the UAV’s energy efficiency with general constraints on the trajectory, including its initial/final locations and velocities, as well as minimum/maximum speed and acceleration. Numerical results show that the proposed designs achieve significantly higher energy efficiency for UAV communication as compared with other benchmark schemes.", "title": "" } ]
[ { "docid": "e0b96837b0908aa859fa56a2b0a5701c", "text": "Being able to automatically describe the content of an image using properly formed English sentences is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. Most modern mobile phones are able to capture photographs, making it possible for the visually impaired to make images of their environments. These images can then be used to generate captions that can be read out loud to the visually impaired, so that they can get a better sense of what is happening around them. In this paper, we present a deep recurrent architecture that automatically generates brief explanations of images. Our models use a convolutional neural network (CNN) to extract features from an image. These features are then fed into a vanilla recurrent neural network (RNN) or a Long Short-Term Memory (LSTM) network to generate a description of the image in valid English. Our models achieve comparable to state of the art performance, and generate highly descriptive captions that can potentially greatly improve the lives of visually impaired people.", "title": "" }, { "docid": "31effa8f9a86950fa34c518f7c25e0e7", "text": "Generative models can be seen as the swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods, is that they often are harder to scale. As a result there are some easier, more scalable shallow methods, such as the Gaussian Mixture Model and the Student-t Mixture Model, that remain surprisingly competitive. In this paper we propose a new scalable deep generative model for images, called the Deep Gaussian Mixture Model, that is a straightforward but powerful generalization of GMMs to multiple layers. The parametrization of a Deep GMM allows it to efficiently capture products of variations in natural images. We propose a new EM-based algorithm that scales well to large datasets, and we show that both the Expectation and the Maximization steps can easily be distributed over multiple machines. In our density estimation experiments we show that deeper GMM architectures generalize better than more shallow ones, with results in the same ballpark as the state of the art.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "e820d9e767766d460463805edf86c684", "text": "Software systems are often designed without considering their social intentionality and the software process changes required to accommodate them. 
With the rise of artificial intelligence and cognitive services-based systems, software can no longer be considered a passive participant in a domain. Structured and methodological approaches are required to study the intentions and motives of such software systems and their corresponding effect on the design of business and software processes that interact with these software systems. This paper considers chatbots as a domain example for illustrating the complexities of designing such intentional and intelligent systems, and the resultant changes and reconfigurations in processes. A mechanism of associating process architecture models and actor models is presented. The modeling and analysis of two types of chatbots, retrieval-based and generative, are shown using both process architecture and actor models.", "title": "" }, { "docid": "01e5c592760ea2a9448bf1c13bbf5b79", "text": "Ontology is an important emerging discipline that has the huge potential to improve information organization, management and understanding. It has a crucial role to play in enabling content-based access, interoperability, communications, and providing qualitatively new levels of services on the next generation of Web transformation in the form of the Semantic Web. The issues pertaining to ontology generation, mapping and maintenance are critical key areas that need to be understood and addressed. This timely survey is presented in two parts. This first part reviews the state-of-the-art techniques and work done on semiautomatic and automatic ontology generation, as well as the problems facing these research efforts. The second complementary survey is dedicated to ontology mapping and ontology evolving. Through this survey, we identified that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data. However, relation extraction is a very complex and difficult issue to resolve and it has turned out to be the main impedance to ontology learning and applicability. Further research is encouraged to find appropriate and efficient ways to detect or identify relations through semi-automatic and automatic means.", "title": "" }, { "docid": "7f27b01099a38a1413df06b6a250425c", "text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses.", "title": "" }, { "docid": "513b378c3fc2e2e6f23a406b63dc33a9", "text": "Mining frequent itemsets from a large transactional database is a critical and important task. Many algorithms have been proposed over the past years, but FP-tree-like algorithms are considered very effective for efficiently mining frequent itemsets. These algorithms are considered efficient because of their compact structure and because they generate fewer candidate itemsets compared to Apriori and Apriori-like algorithms. 
Therefore, this paper aims to present the basic concepts of some of the algorithms (FP-Growth, COFI-Tree, CT-PRO) based upon the FP-Tree-like structure for mining frequent itemsets, along with their capabilities and comparisons.", "title": "" }, { "docid": "c01e3b06294f9e84bcc9d493990c6149", "text": "An integrated CMOS 60 GHz phased-array antenna module supporting symmetrical 32 TX/RX elements for wireless docking is described. Bidirectional architecture with shared blocks, mm-wave TR switch design with less than 1dB TX loss, and full built-in self-test (BIST) circuits with 5deg and +/-1dB measurement accuracy of phase and power are presented. The RFIC size is 29mm2, consuming 1.2W/0.85W at TX and RX with a 29dBm EIRP at -19dB EVM and 10dB NF.", "title": "" }, { "docid": "d8748f3c6192e0e2fe3cdb9b745ef703", "text": "In this paper, we consider a method for computing the similarity of executable files, based on opcode graphs. We apply this technique to the challenging problem of metamorphic malware detection and compare the results to previous work based on hidden Markov models. In addition, we analyze the effect of various morphing techniques on the success of our proposed opcode graph-based detection scheme.", "title": "" }, { "docid": "7ec2f6b720cdcabbcdfb7697dbdd25ae", "text": "To help marketers to build and manage their brands in a dramatically changing marketing communications environment, the customer-based brand equity model that emphasizes the importance of understanding consumer brand knowledge structures is put forth. Specifically, the brand resonance pyramid is reviewed as a means to track how marketing communications can create intense, active loyalty relationships and affect brand equity. According to this model, integrating marketing communications involves mixing and matching different communication options to establish the desired awareness and image in the minds of consumers. The versatility of on-line, interactive marketing communications to marketers in brand building is also addressed.", "title": "" }, { "docid": "8bd570ecdcaadac2e2c2903b22a63a48", "text": "In this paper, we investigate the use of recurrent neural networks (RNNs) in the context of search-based online advertising. We use RNNs to map both queries and ads to real-valued vectors, with which the relevance of a given (query, ad) pair can be easily computed. On top of the RNN, we propose a novel attention network, which learns to assign attention scores to different word locations according to their intent importance (hence the name DeepIntent). The vector output of a sequence is thus computed by a weighted sum of the hidden states of the RNN at each word according to their attention scores. We perform end-to-end training of both the RNN and attention network under the guidance of user click logs, which are sampled from a commercial search engine. We show that in most cases the attention network improves the quality of learned vector representations, evaluated by AUC on a manually labeled dataset. Moreover, we highlight the effectiveness of the learned attention scores from two aspects: query rewriting and a modified BM25 metric. We show that using the learned attention scores, one is able to produce sub-queries that are of better quality than those of the state-of-the-art methods. 
Also, by modifying the term frequency with the attention scores in a standard BM25 formula, one is able to improve its performance evaluated by AUC.", "title": "" }, { "docid": "19d79b136a9af42ac610131217de8c08", "text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of empathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress.", "title": "" }, { "docid": "65d3d020ee63cdeb74cb3da159999635", "text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. 
The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.", "title": "" }, { "docid": "438093b14f983499ada7ce392ba27664", "text": "The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce. The defining equations are presented here, together with an efficient method for determining the necessary parameters and computing the resultant spline. The standard scalar-valued curve fitting problem is discussed, as well as the fitting of open and closed curves in the plane. The use of these curves and the importance of the tension in the fitting of contour lines are mentioned as application.", "title": "" }, { "docid": "e97f74244a032204e49d9306032f09a7", "text": "For the discovery of biomarkers in the retinal vasculature it is essential to classify vessels into arteries and veins. We automatically classify retinal vessels as arteries or veins based on colour features using a Gaussian Mixture Model, an Expectation-Maximization (GMM-EM) unsupervised classifier, and a quadrant-pairwise approach. Classification is performed on illumination-corrected images. 406 vessels from 35 images were processed resulting in 92% correct classification (when unlabelled vessels are not taken into account) as compared to 87.6%, 90.08%, and 88.28% reported in [12] [14] and [15]. The classifier results were compared against two trained human graders to establish performance parameters to validate the success of the classification method. The proposed system results in specificity of (0.8978, 0.9591) and precision (positive predicted value) of (0.9045, 0.9408) as compared to specificity of (0.8920, 0.7918) and precision of (0.8802, 0.8118) for (arteries, veins) respectively as reported in [13]. The classification accuracy was found to be 0.8719 and 0.8547 for veins and arteries, respectively.", "title": "" }, { "docid": "f31b3c4a2a8f3f05c3391deb1660ce75", "text": "In the field of providing mobility for the elderly or disabled the aspect of dealing with stairs continues largely unresolved. This paper focuses on presenting continued development of the “Nagasaki Stairclimber”, a dual-section tracked wheelchair capable of negotiating the large number of twisting and irregular stairs typically encountered by the residents living on the slopes that surround the Nagasaki harbor. Recent developments include an auto guidance system, auto leveling of the chair angle and active control of the front-rear track angle.", "title": "" }, { "docid": "9754e47ad74b2d8a3cf6ae31c1ebc322", "text": "Variational auto-encoder (VAE) is a powerful unsupervised learning framework for image generation. One drawback of VAE is that it generates blurry images due to its Gaussianity assumption and thus ℓ2 loss. To allow the generation of high quality images by VAE, we increase the capacity of the decoder network by employing residual blocks and skip connections, which also enable efficient optimization. To overcome the limitation of ℓ2 loss, we propose to generate images in a multi-stage manner from coarse to fine. In the simplest case, the proposed multi-stage VAE divides the decoder into two components in which the second component generates refined images based on the coarse images generated by the first component. 
Since the second component is independent of the VAE model, it can employ other loss functions beyond the ℓ2 loss and different model architectures. The proposed framework can be easily generalized to contain more than two components. Experimental results on the MNIST and CelebA datasets demonstrate that the proposed multi-stage VAE can generate sharper images as compared to those from the original VAE.", "title": "" }, { "docid": "2065e94d7bee8d9c3d446589b3060c76", "text": "Background: The trial aimed to investigate whether a general practitioner's (GP) letter encouraging participation and a more explicit leaflet explaining how to complete faecal occult blood test (FOBT) included with the England Bowel Cancer Screening Programme invitation materials would improve uptake. Methods: A randomised controlled 2 × 2 factorial trial was conducted in the south of England. Overall, 1288 patients registered with 20 GPs invited for screening in October 2009 participated in the trial. Participants were randomised to either a GP's endorsement letter and/or an enhanced information leaflet with their FOBT kit. The primary outcome was verified with return of the test kit within 20 weeks. Results: Both the GP's endorsement letter and the enhanced procedural leaflet each increased participation by ∼6% – the GP's letter by 5.8% (95% CI: 4.1–7.8%) and the leaflet by 6.0% (95% CI: 4.3–8.1%). On the basis of the intention-to-treat analysis, the random effects logistic regression model confirmed that there was no important interaction between the two interventions, and estimated an adjusted rate ratio of 1.11 (P=0.038) for the GP's letter and 1.12 (P=0.029) for the leaflet. In the absence of an interaction, an additive effect for receiving both the GP's letter and leaflet (11.8%, 95% CI: 8.5–16%) was confirmed. The per-protocol analysis indicated that the insertion of an electronic GP's signature on the endorsement letter was associated with increased participation (P=0.039). Conclusion: Including both an endorsement letter from each patient's GP and a more explicit procedural leaflet could increase participation in the English Bowel Cancer Screening Programme by ∼10%, a relative improvement of 20% on current performance.", "title": "" }, { "docid": "5a416fb88c3f5980989f7556fb19755c", "text": "Cloud computing helps to share data and provide many resources to users. Users pay only for the resources they use. Cloud computing stores the data and distributed resources in the open environment. The amount of data storage increases quickly in the open environment. So, load balancing is a main challenge in the cloud environment. Load balancing helps to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in proper utilization of resources. It also improves the performance of the system. Many existing algorithms provide load balancing and better resource utilization. Various types of load are possible in cloud computing, such as memory, CPU and network load. Load balancing is the process of finding overloaded nodes and then transferring the extra load to other nodes.", "title": "" }, { "docid": "872ef59b5bec5f6cbb9fcb206b6fe49e", "text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. 
Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.", "title": "" } ]
scidocsrr
e72dfb0ebe6c84b23391d367818c3e08
An Approach for the Cooperative Control of FES With a Powered Exoskeleton During Level Walking for Persons With Paraplegia
[ { "docid": "2b09ae15fe7756df3da71cfc948e9506", "text": "Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available.", "title": "" } ]
[ { "docid": "0d6e5e20d6a909a6450671feeb4ac261", "text": "Rita bakalu, a new species, is described from the Godavari river system in peninsular India. With this finding, the genus Rita is enlarged to include seven species, comprising six species found in South Asia, R. rita, R. macracanthus, R. gogra, R. chrysea, R. kuturnee, R. bakalu, and one species R. sacerdotum from Southeast Asia. R. bakalu is distinguished from its congeners by a combination of the following characters: eye diameter 28–39% HL and 20–22 caudal fin rays; teeth in upper jaw uniformly villiform in two patches, interrupted at the midline; palatal teeth well-developed villiform, in two distinct patches located at the edge of the palate. The mtDNA cytochrome C oxidase I sequence analysis confirmed that the R. bakalu is distinct from the other congeners of Rita. Superficially, R. bakalu resembles R. kuturnee, reported from the Godavari and Krishna river systems; however, the two species are discriminated due to differences in the structure of their teeth patches on upper jaw and palate, anal fin originating before the origin of adipose fin, comparatively larger eye diameter, longer mandibular barbels, and vertebral count. The results conclude that the river Godavari harbors a different species of Rita, R. bakalu which is new to science.", "title": "" }, { "docid": "fd2d04af3b259a433eb565a41b11ffbd", "text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.", "title": "" }, { "docid": "2fba3b2ae27e1389557794673137480d", "text": "The paper provides an OWL ontology for legal cases with an instantiation of the legal case Popov v. Hayashi. The ontology makes explicit the conceptual knowledge of the legal case domain, supports reasoning about the domain, and can be used to annotate the text of cases, which in turn can be used to populate the ontology. A populated ontology is a case base which can be used for information retrieval, information extraction, and case based reasoning. The ontology contains not only elements for indexing the case (e.g. the parties, jurisdiction, and date), but as well elements used to reason to a decision such as argument schemes and the components input to the schemes. We use the Protégé ontology editor and knowledge acquisition system, current guidelines for ontology development, and tools for visual and linguistic presentation of the ontology.", "title": "" }, { "docid": "1e51c63d00373a45460b11d5a3b5e2ae", "text": "Software architecture is one of the most important tools for designing and understanding a system, whether that system is in preliminary design, active deployment, or maintenance. Scenarios are important tools for exercising an architecture in order to gain information about a system’s fitness with respect to a set of desired quality attributes. This paper presents an experiential case study illustrating the methodological use of scenarios to gain architecture-level understanding and predictive insight into large, real-world systems in various domains. 
A structured method for scenario-based architectural analysis is presented, using scenarios to analyze architectures with respect to achieving quality attributes. Finally, lessons and morals are presented, drawn from the growing body of experience in applying scenario-based architectural analysis techniques.", "title": "" }, { "docid": "2a3f5f621195c036064e3d8c0b9fc884", "text": "This paper describes our system for the CoNLL 2016 Shared Task’s supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.", "title": "" }, { "docid": "070a1c6b47a0a5c217e747cd7e0e0d0b", "text": "In this paper we develop a computational model of visual adaptation for realistic image synthesis based on psychophysical experiments. The model captures the changes in threshold visibility, color appearance, visual acuity, and sensitivity over time that are caused by the visual system’s adaptation mechanisms. We use the model to display the results of global illumination simulations illuminated at intensities ranging from daylight down to starlight. The resulting images better capture the visual characteristics of scenes viewed over a wide range of illumination levels. Because the model is based on psychophysical data it can be used to predict the visibility and appearance of scene features. This allows the model to be used as the basis of perceptually-based error metrics for limiting the precision of global illumination computations. CR", "title": "" }, { "docid": "af22932b48a2ea64ecf3e5ba1482564d", "text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. 
This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.", "title": "" }, { "docid": "45cbfbe0a0bcf70910a6d6486fb858f0", "text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.", "title": "" }, { "docid": "90469bbf7cf3216b2ab1ee8441fbce14", "text": "This work presents the evolution of a solution for predictive maintenance to a Big Data environment. The proposed adaptation aims for predicting failures on wind turbines using a data-driven solution deployed in the cloud and which is composed by three main modules. (i) A predictive model generator which generates predictive models for each monitored wind turbine by means of Random Forest algorithm. (ii) A monitoring agent that makes predictions every 10 minutes about failures in wind turbines during the next hour. Finally, (iii) a dashboard where given predictions can be visualized. To implement the solution Apache Spark, Apache Kafka, Apache Mesos and HDFS have been used. Therefore, we have improved the previous work in terms of data process speed, scalability and automation. In addition, we have provided fault-tolerant functionality with a centralized access point from where the status of all the wind turbines of a company localized all over the world can be monitored, reducing O&M costs.", "title": "" }, { "docid": "70927f955911f63836cc4142ef8aad44", "text": "Cyberbullying is a unique phenomenon, distinguished from traditional bullying by the speed at which information is distributed, permanence of material and availability of victims. There is however a paucity of research in this area, and few studies have examined the factors contributing to cyberbullying behaviour. The present study investigated the influence of self-esteem, empathy and loneliness on cyberbullying victimisation and perpetration. British adolescents (N = 90) aged 16–18 years were recruited from Further Education colleges. 
Participants completed the Revised Cyber Bullying Inventory (RCBI, Topcu & Erdur-Baker, 2010), the UCLA Loneliness Scale (Russell, Peplau, & Ferguson, 1978), Toronto Empathy Questionnaire (TEQ, Spreng, McKinnon, Mar, & Levine, 2009) and Rosenberg Self-Esteem Scale (Rosenberg, 1965) online. Standard multiple regressions revealed that together, loneliness, empathy and self-esteem predicted levels of cyberbullying victimisation and perpetration. Self-esteem was a significant individual predictor of cyberbullying victimisation and perpetration, such that those with low self-esteem were most likely to report experience of cyberbullying. Empathy was a significant individual predictor of cyberbullying perpetration, such that as empathy decreases, likelihood of cyberbullying perpetration increases. These findings indicate that self-esteem and empathy oriented interventions may successfully address cyberbullying behaviour. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "35e4df3d3da5fee60235bf7680de7fd1", "text": "Many people who would benefit from mental health services opt not to pursue them or fail to fully participate once they have begun. One of the reasons for this disconnect is stigma; namely, to avoid the label of mental illness and the harm it brings, people decide not to seek or fully participate in care. Stigma yields 2 kinds of harm that may impede treatment participation: It diminishes self-esteem and robs people of social opportunities. Given the existing literature in this area, recommendations are reviewed for ongoing research that will more comprehensively expand understanding of the stigma-care seeking link. Implications for the development of antistigma programs that might promote care seeking and participation are also reviewed.", "title": "" }, { "docid": "4e91d37de7701e4a03c506c602ef3455", "text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.", "title": "" }, { "docid": "687b8d68cd2fe687dff2edb77fec0f63", "text": "MicroRNAs (miRNAs) are an abundant class of small non-protein-coding RNAs that function as negative gene regulators. They regulate diverse biological processes, and bioinformatic data indicates that each miRNA can control hundreds of gene targets, underscoring the potential influence of miRNAs on almost every genetic pathway. 
Recent evidence has shown that miRNA mutations or mis-expression correlate with various human cancers and indicates that miRNAs can function as tumour suppressors and oncogenes. miRNAs have been shown to repress the expression of important cancer-related genes and might prove useful in the diagnosis and treatment of cancer.", "title": "" }, { "docid": "e48240932e17d7b3cd340dee79797984", "text": "We describe a two-scan algorithm for labeling connected components in binary images in raster format. Unlike the classical two-scan approach, our algorithm processes equivalences during the first scan by merging equivalence classes as soon as a new equivalence is found. We show that this significantly improves the efficiency of the labeling process with respect to the classical approach. The data structure used to support the handling of equivalences is a 1D-array. This renders the more frequent operation of finding class identifiers very fast, while the less-frequent class-merging operation has a relatively high computational cost. Nonetheless, it is possible to reduce the merging cost significantly by two slight modifications to the algorithm's basic structure. The idea of merging equivalence classes is also present in Samet's general labeling algorithm. However, when considering the case of binary images in raster format this algorithm is much more complex than the one we describe in this paper.", "title": "" }, { "docid": "0974cee877ff2fecfda81d48012c07d3", "text": "A new method of blinking detection is proposed. The most important property of a blinking detection method is robustness against different users, noise, and also changes of eye shape. In this paper, we propose a blinking detection method that measures the distance between the two arcs of the eye (upper part and lower part). We detect the eye arcs by applying a Gabor filter to the eye image. The Gabor filter is advantageous in image processing applications since it is able to extract spatially localized spectral features such as lines, arches, and other shapes. After the two eye arcs are detected, we measure the distance between them by using a connected labeling method. The eye is marked as open when the distance between the two arcs is more than a threshold, and otherwise marked as closed when the distance is less than the threshold. The experimental results show that our proposed method is robust against different users, noise, and eye shape changes, with perfect accuracy.", "title": "" }, { "docid": "ebc57f065fa7f3206564ff14539b0707", "text": "Following the Daubert ruling in 1993, forensic evidence based on fingerprints was first challenged in the 1999 case of the U.S. versus Byron C. Mitchell and, subsequently, in 20 other cases involving fingerprint evidence. The main concern with the admissibility of fingerprint evidence is the problem of individualization, namely, that the fundamental premise for asserting the uniqueness of fingerprints has not been objectively tested and matching error rates are unknown. In order to assess the error rates, we require quantifying the variability of fingerprint features, namely, minutiae in the target population. A family of finite mixture models has been developed in this paper to represent the distribution of minutiae in fingerprint images, including minutiae clustering tendencies and dependencies in different regions of the fingerprint image domain. A mathematical model that computes the probability of a random correspondence (PRC) is derived based on the mixture models. 
A PRC of 2.25 × 10^-6 corresponding to 12 minutiae matches was computed for the NIST4 Special Database, when the numbers of query and template minutiae both equal 46. This is also the estimate of the PRC for a target population with a similar composition as that of NIST4.", "title": "" }, { "docid": "4a8fa0edc026c1c0d44293ee3840b6dc", "text": "We introduce an extended representation of time series that allows fast, accurate classification and clustering in addition to the ability to explore time series data in a relevance feedback framework. The representation consists of piecewise linear segments to represent shape and a weight vector that contains the relative importance of each individual linear segment. In the classification context, the weights are learned automatically as part of the training cycle. In the relevance feedback context, the weights are determined by an interactive and iterative process in which users rate various choices presented to them. Our representation allows a user to define a variety of similarity measures that can be tailored to specific domains. We demonstrate our approach on space telemetry, medical and synthetic data.", "title": "" }, { "docid": "295d423e72159cb5855aace159592c67", "text": "This paper proposes a novel speech recognition method combining Audio-Visual Voice Activity Detection (AVVAD) and Audio-Visual Automatic Speech Recognition (AVASR). AVASR has been developed to enhance the robustness of ASR in noisy environments, using visual information in addition to acoustic features. Similarly, AVVAD increases the precision of VAD, which detects the presence of speech in an audio signal, under noisy conditions. In our approach, AVVAD is conducted as a preprocessing step followed by an AVASR system, making a significantly robust speech recognizer. To evaluate the proposed system, recognition experiments were conducted using noisy audio-visual data, testing several AVVAD approaches. It is found that the proposed AVASR system using the model-free feature-fusion AVVAD method outperforms not only non-VAD audio-only ASR but also conventional AVASR.", "title": "" }, { "docid": "c973dc425e0af0f5253b71ae4ebd40f9", "text": "A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto's blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes. We present SPECTRE, a new protocol for the consensus core of crypto-currencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip time in the network). Key to SPECTRE's achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users.
We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a crypto-currency’s distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements.", "title": "" }, { "docid": "eba25ae59603328f3ef84c0994d46472", "text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.", "title": "" } ]
scidocsrr
97a676418c152d6b79748390ee141722
Interaction primitives for human-robot cooperation tasks
[ { "docid": "cae4703a50910c7718284c6f8230a4bc", "text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.", "title": "" }, { "docid": "7b526ab92e31c2677fd20022a8b46189", "text": "Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.", "title": "" } ]
[ { "docid": "e74fc82a37b7278e2cb8bbe5f839639d", "text": "While neural networks have been remarkably successful for a variety of practical problems, they are often applied as a black box, which limits their utility for scientific discoveries. Here, we present a neural network architecture that can be used to discover physical concepts from experimental data without being provided with additional prior knowledge. For a variety of simple systems in classical and quantum mechanics, our network learns to compress experimental data to a simple representation and uses the representation to answer questions about the physical system. Physical concepts can be extracted from the learned representation, namely: (1) The representation stores the physically relevant parameters, like the frequency of a pendulum. (2) The network finds and exploits conservation laws: it stores the total angular momentum to predict the motion of two colliding particles. (3) Given measurement data of a simple quantum mechanical system, the network correctly recognizes the number of degrees of freedom describing the underlying quantum state. (4) Given a time series of the positions of the Sun and Mars as observed from Earth, the network discovers the heliocentric model of the solar system — that is, it encodes the data into the angles of the two planets as seen from the Sun. Our work provides a first step towards answering the question whether the traditional ways by which physicists model nature naturally arise from the experimental data without any mathematical and physical pre-knowledge, or if there are alternative elegant formalisms, which may solve some of the fundamental conceptual problems in modern physics, such as the measurement problem in quantum mechanics.", "title": "" }, { "docid": "ed20c75915c3c0c7d7e32f2ec0334a65", "text": "Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N 1⁄4 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism. Descriptive results indicate that a sizable majority of respondents were exposed to negative materials online. The materials were most commonly used to stereotype groups. Nearly half of negative material centered on race or ethnicity, and respondents were likely to encounter such material on social media sites. Regression results demonstrate African-Americans and foreign-born respondents were significantly less likely to be exposed to negative material online, as are younger respondents. Additionally, individuals expressing greater levels of trust in the federal government report significantly less exposure to such materials. Higher levels of education result in increased exposure to negative materials, as does a proclivity towards risk-taking. © 2016 Elsevier Ltd. All rights reserved. While the Internet has obvious benefits, its uncensored nature also exposes users to extremist ideas some find vile, offensive, and disturbing. Authorities consider online extremism a threat to national security and note the need for research on it (e.g. Hussain & Saltman, 2014; Levin, 2015; The White House, 2015). 
While scholars are discussing strategies for countering the effects of extremism (Helmus, York, & Chalk, 2013; Neumann, 2013), few have investigated who is exposed to extremist materials (for exceptions, see Hawdon, Oksanen, & Räsänen, 2014; Räsänen et al., 2015). Yet, we must understand who sees extremist materials if we are to effectively limit exposure or disseminate countermessages. Moreover, since youth appear to be most vulnerable to extremist messages (Oksanen, Hawdon, Holkeri, Näsi, & Räsänen, 2014; Onuoha, 2014; Torok, 2016), there is an enhanced need to investigate what online behaviors place them at risk for exposure. To help guide efforts in combatting online extremism by understanding who sees these materials, we use a sample of youth and young adults to investigate the behavioral and attitudinal factors that lead to exposure. We frame the analysis using routine activity theory (RAT), which argues that crimes occur when a motivated offender, a suitable target, and a lack of capable guardians converge in time and space (Cohen & Felson, 1979). The theory explains how victims' activities can expose them to dangerous people, places, and situations. We also extend RAT by incorporating insights from social learning theory (Akers, 1977). Specifically, we consider if those who distrust government are more likely to view extremist messages because their ideology leads them to frequent online environments where extremist opinions are posted. Therefore, we focus on two research questions: R1: What behaviors place youth and young adults at risk of being virtually proximate to extremist materials? R2: Does the lack of trust in the government increase exposure to extremist materials, all else being equal? By identifying behaviors and attitudes that lead to extremism, our research will help authorities design strategies to counter its effects. The current study, which was approved by the Institutional Review Boards (IRBs) of the universities involved in the project as well as the National Institute of Justice, begins with a discussion of online extremism. We then review RAT and extend it by considering insights from social learning theory. We then predict exposure to online hate materials among a sample of 1029 youth and young adults. We conclude by considering the implications of our research. 1. Online extremism: its nature, types, and dangers. The phenomenon we consider is a type of cyberviolence (see Wall, 2001) and goes by many names: online extremism, online hate, or cyberhate. We consider online hate or extremism to be the use of information computer technology (ICT) to profess attitudes devaluating others because of their religion, race, ethnicity, gender, sexual orientation, national origin, or some other characteristic. As such, online hate material is a distinct form of cyberviolence, as abuse is aimed at a collective identity rather than a specific individual (Hawdon et al., 2014). Contrasting exposure to online hate with cyberbullying, Räsänen and his colleagues (forthcoming) argue that exposure to online hate material does not attack the individual in isolation; instead, this form of violence occurs when individuals are unwittingly exposed to materials against their will that express hatred or degrading attitudes toward a collective to which they belong. That is, hate materials denigrate groups; it is not an attack that focuses on individuals.
Extremists, both individuals and organized groups, champion their cause, recruit members, advocate violence, and create international extremist communities through websites, blogs, chat rooms, file archives, listservers, news groups, internet communities, online video games, and web rings (Amster, 2009; Burris, Smith, & Strahm, 2000; Franklin 2010; Hussain & Saltman, 2014). Organized hate groups such as the Ku Klux Klan have used the web since its public inception (Amster, 2009; Gerstenfeld, Grant, & Chiang, 2003), but individuals maintaining sites or commenting online have surpassed organized groups as the main perpetrators (Potok, 2015). Given the nature of our analysis (self-reported exposure to online hate materials), we cannot determine if the material to which the respondents refer was posted by a formal group or an individual; nevertheless, the respondents claim the site expressed hatred toward some collective, the essence of our definition of online hate materials. It is important to realize exposure to online hate material may not be victimizing, per se. Some people actively seek such materials, and they would not be "victimized" in the traditional sense of the word. Others, however, come upon this material inadvertently. Even when the material is found accidentally, we should not overstate the dangers these materials pose. Many people view hate materials without experiencing negative consequences, and most hate messages do not directly advocate violence (Douglas, McGarty, Bliuc, & Lala, 2005; Gerstenfeld et al., 2003; Glaser, Dixit, & Green, 2002; McNamee, Peterson, & Peña, 2010). Nevertheless, exposure to hate materials correlates with several problematic behaviors and attitudes (Subrahmanyam & Smahel, 2011). For example, members of targeted groups can experience mood swings, anger, and fear after exposure (Tynes, 2006; Tynes, Reynolds, & Greenfield, 2004). In addition, exposure to online hate materials is inversely related to social trust (Näsi et al., 2015). Long-term exposure to hate materials can reinforce discrimination against vulnerable groups (Cowan & Mettrick, 2002; Foxman & Wolf, 2013) and lead to an inter-generational perpetuation of extremist ideologies (Perry, 2000; Tynes, 2006). In some cases, exposure to online hate materials is directly linked to violence, including acts of mass violence and terror (Federal Bureau of Investigation 2011a; for a list of deadly attacks see Freilich, Belli, & Chermak, 2011; The New America Foundation International Security Program 2015). Recently, exposure to extremist ideology has been implicated in recruiting youth to extremist causes, including terrorist organizations such as the Islamic State of Iraq and the Levant (ISIL). It is therefore important to understand who is likely to be exposed to these materials. 2. Correlates of exposure. The limited number of existing studies analyzing exposure to online hate and extremism rely on Cohen and Felson's (1979) routine activity theory (RAT) and its recent revisions. RAT argues that crimes occur when a motivated offender, a suitable target, and a lack of capable guardians converge in time and space (Cohen & Felson, 1979). Individuals' activities can place them in danger by bringing them into contact with potential offenders and into environments that lack guardians who could confront those offenders (see Cohen & Felson, 1979; Miethe & Meier, 1990).
In addition, individuals' routines influence how attractive they are to offenders, and the probability of victimization increases as target attractiveness increases (Cohen & Felson, 1979). While there are complicating factors for applying RAT to the online world (see Tillyer & Eck, 2009; Yar, 2005, 2013), the cyberlifestyle-routine activities perspective (Eck & Clarke, 2003; Reyns, 2013; Reyns, Henson, & Fisher, 2011) overcomes some of these problems. Most notably, while cybervictims and offenders do not converge in time and space as victims and offenders do in the offline world, they nevertheless come into virtual contact through their networked devices (Reyns et al., 2011). The asynchronous nature of cyberviolence is clearly seen with exposure to hate material. Those posting hate materials can offend people across spaces and time because, once materials are posted, people can become exposed to them without ever directly interacting with offenders. The primary factor likely resulting in hate material exposure is proximity to "offenders." More precisely, given the asynchronous nature of the Internet, proximity to the virtual places where offenders have been is the primary determinant of exposure. In the language of RAT, victimization should be related to factors leading one into dangerous places. As noted above, one need not directly encounter an offender; instead, one only need a", "title": "" }, { "docid": "e5dc8960875484b4e5f6e8470aa415c2", "text": "Step 2. Search for a good low-rank representation X = WC⊤ in terms of linguistic metrics, where W is a matrix of word embeddings and C is a matrix of context embeddings. 5. RESULTS • Three different methods: RO-SGNS [1], SVD-SPPMI [2] and SGD-SGNS (original "word2vec"). • Three popular benchmarks for semantic similarity evaluation ("wordsim-353", "simlex", "men") • Each dataset contains word pairs together with assessor-assigned similarity scores for each pair • Original "wordsim-353" is a mixture of the word pairs for both word similarity and word relatedness tasks which we also use in our experiments ("ws-sim" and "ws-rel")", "title": "" }, { "docid": "c66069fc52e1d6a9ab38f699b6a482c6", "text": "An understanding of the age of the Acheulian and the transition to the Middle Stone Age in southern Africa has been hampered by a lack of reliable dates for key sequences in the region. A number of researchers have hypothesised that the Acheulian first occurred simultaneously in southern and eastern Africa at around 1.7-1.6 Ma. A chronological evaluation of the southern African sites suggests that there is currently little firm evidence for the Acheulian occurring before 1.4 Ma in southern Africa. Many researchers have also suggested the occurrence of a transitional industry, the Fauresmith, covering the transition from the Early to Middle Stone Age, but again, the Fauresmith has been poorly defined, documented, and dated. Despite the occurrence of large cutting tools in these Fauresmith assemblages, they appear to include all the technological components characteristic of the MSA. New data from stratified Fauresmith-bearing sites in southern Africa suggest this transitional industry may be as old as 511-435 ka and should represent the beginning of the MSA as a broad entity rather than the terminal phase of the Acheulian. The MSA in this form is a technology associated with archaic H.
sapiens and early modern humans in Africa with a trend of greater complexity through time.", "title": "" }, { "docid": "1fcdfd02a6ecb12dec5799d6580c67d4", "text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.", "title": "" }, { "docid": "06f9780257311891f54c5d0c03e73c1a", "text": "This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement the paper argues that the study of artifacts once constructed can not be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (viii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, for both design science researchers and also for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.", "title": "" }, { "docid": "d22390e43aa4525d810e0de7da075bbf", "text": "information, including knowledge management and e-business applications. Next-generation knowledge management systems will likely rely on conceptual models in the form of ontologies to precisely define the meaning of various symbols. 
For example, FRODO (a Framework for Distributed Organizational Memories) uses ontologies for knowledge description in organizational memories,1 CoMMA (Corporate Memory Management through Agents) investigates agent technologies for maintaining ontology-based knowledge management systems,2 and Steffen Staab and his colleagues have discussed the methodologies and processes for building ontology-based systems.3 Here we present an integrated enterprise-knowledge management architecture for implementing an ontology-based knowledge management system (OKMS). We focus on two critical issues related to working with ontologies in real-world enterprise applications. First, we realize that imposing a single ontology on the enterprise is difficult if not impossible. Because organizations must devise multiple ontologies and thus require integration mechanisms, we consider means for combining distributed and heterogeneous ontologies using mappings. Additionally, a system’s ontology often must reflect changes in system requirements and focus, so we developed guidelines and an approach for managing the difficult and complex ontology-evolution process.", "title": "" }, { "docid": "df0e13e1322a95046a91fb7c867d968a", "text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost) , this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet , can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan. The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhancing management performance for the website shopping industry.", "title": "" }, { "docid": "44e7859ae527003a9979884fabe022f9", "text": "Recurrent neural networks (RNN) are a widely used tool for the prediction of time series. In this paper we use the dynamic behaviour of the RNN to categorize input sequences into different specified classes. These two tasks do not seem to have much in common. However, the prediction task strongly supports the development of a suitable internal structure, representing the main features of the input sequence, to solve the classification problem. Therefore, the speed and success of the training as well as the generalization ability of the trained RNN are significantly improved. The trained RNN provides good classification performance and enables the user to assess efficiently the degree of reliability of the classification result.", "title": "" }, { "docid": "394c8f7a708d69ca26ab0617ab1530ab", "text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. 
It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.", "title": "" }, { "docid": "a06269431a16347154cf18d87b5c2ee8", "text": "[Only journal front matter was extracted for this passage; no abstract text survives. Recoverable details: an earlier version of the paper appeared at the IEEE Intelligent Transportation Systems Conference (ITSC), Madeira, Portugal; it describes the control system for the VisLab Intercontinental Autonomous Challenge, a test carried out by VisLab in summer 2010 in which autonomous vehicles drove themselves from Parma, Italy, toward Shanghai, mostly across regions for which digital map information was not available; authors include A. Broggi, P. Zani, and A. Coati; DOI: 10.1016/j.arcontrol.2012.03.012.]", "title": "" }, { "docid": "ca34e7cef347237a370fbf4772c77f3e", "text": "Given a set P of n points in the plane, we consider the problem of covering P with a minimum number of unit disks. This problem is known to be NP-hard. We present a simple 4-approximation algorithm for this problem which runs in O(n log n) time. We also show how to extend this algorithm to other metrics, and to three dimensions.", "title": "" }, { "docid": "fee64e0be9a5db75c3f259aae01b6a12", "text": "A simple method, based on elementary fourth-order cumulants, is proposed for the classification of digital modulation schemes. These statistics are natural in this setting as they characterize the shape of the distribution of the noisy baseband I and Q samples. It is shown that cumulant-based classification is particularly effective when used in a hierarchical scheme, enabling separation into subclasses at low signal-to-noise ratio with small sample size. Thus, the method can be used as a preliminary classifier if desired. Computational complexity is of order N, where N is the number of complex baseband data samples. This method is robust in the presence of carrier phase and frequency offsets and can be implemented recursively. Theoretical arguments are verified via extensive simulations and comparisons with existing approaches.", "title": "" }, { "docid": "beb90397ff3d1ef0d71463fb2d9b1b97", "text": "Due to the strong competition that exists today, most manufacturing organizations are in a continuous effort for increasing their profits and reducing their costs. Accurate sales forecasting is certainly an inexpensive way to meet the aforementioned goals, since this leads to improved customer service, reduced lost sales and product returns and more efficient production planning.
Especially for the food industry, successful sales forecasting systems can be very beneficial, due to the short shelf-life of many food products and the importance of the product quality which is closely related to human health. In this paper we present a complete framework that can be used for developing nonlinear time series sales forecasting models. The method is a combination of two artificial intelligence technologies, namely the radial basis function (RBF) neural network architecture and a specially designed genetic algorithm (GA). The methodology is applied successfully to sales data of fresh milk provided by a major manufacturing company of dairy products. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cec6e899c23dd65881f84cca81205eb0", "text": "A fuzzy graph (f-graph) is a pair G : (σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : (τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ(u) ≤ σ(u) for every u and υ(u, v) ≤ μ(u, v) for every u and v. In particular we call a partial fuzzy subgraph H : (τ, υ) a fuzzy subgraph of G : (σ, μ) if τ(u) = σ(u) for every u in τ* and υ(u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : (σ, μ) is a fuzzy tree (f-tree) if it has a fuzzy spanning subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not in F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of distinct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n, and the degree of membership of a weakest arc is defined as its strength. If u0 = un and n ≥ 3, then P is called a cycle, and a cycle P is called a fuzzy cycle (f-cycle) if it contains more than one weakest arc. The strength of connectedness between two nodes x and y is defined as the maximum of the strengths of all paths between x and y and is denoted by CONN_G(x, y). An x − y path P is called a strongest x − y path if its strength equals CONN_G(x, y). An f-graph G : (σ, μ) is connected if for every x, y in σ, CONN_G(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.", "title": "" }, { "docid": "86a78e909da53a6eb8a073978e25d489", "text": "In this paper, we evaluate the accuracy of personality-based recommendations using a real-world data set from Amazon.com. We automatically infer the personality traits, needs, and values of users based on unstructured user-generated content in social media, rather than administering questionnaires or explicitly asking the users to self-report their characteristics. We find that personality characteristics significantly increase the performance of recommender systems, in general, while different personality models exhibit statistically significant differences in predictive performance.", "title": "" }, { "docid": "912f5be62efd6b8f28054859d6c86aee", "text": "This work uses deep learning methods for predicting intraday directional movements of the Standard & Poor's 500 index using financial news titles and a set of technical indicators as input. Deep learning methods can detect and analyze complex patterns and interactions in the data automatically, allowing the trading process to be sped up. This paper focuses on architectures such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), which have had good results in traditional NLP tasks.
Results has shown that CNN can be better than RNN on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. The proposed method shows some improvement when compared with similar previous studies.", "title": "" }, { "docid": "8654f6e707c77a0f46ee993f1f27a287", "text": "The DeepQ tricorder device developed by HTC from 2013 to 2016 was entered in the Qualcomm Tricorder XPRIZE competition and awarded the second prize in April 2017. This paper presents DeepQ»s three modules powered by artificial intelligence: symptom checker, optical sense, and vital sense. We depict both their initial design and ongoing enhancements.", "title": "" }, { "docid": "987c19879542f16702c5026d1c417c35", "text": "Ultracapacitor usually use as a short-term duration electrical energy storage because it has several advantages, like high power density (5kW/kg), long lifecycle and very good charge/discharge efficiency. Unlike batteries, ultracapacitors may be charged and discharged at similar rates. This is very useful in energy recovery systems such as dynamic braking of transport systems. Here are a few characteristics of ultracapacitors that should be kept in mind when integrating/designing a charging system for the intended application. An ultracapacitor with zero charge looks like a short circuit to the charging source. Most of low cost power supplies fold back the output current in response to a perceived short circuit, making them unsuitable for charging of ultracapacitors. Ultracapacitors have a low series inductance allowing easy stabilizing with switch mode chargers. The RC time constant of passive charging networks is usually too long. Therefore, linear regulators are inefficient components for ultracapacitor charging. In this paper, the development of a current control ultracapacitor charger based on Digital Signal Processing (DSP) is presented. Keyword: current control, DSP, ultracapacitor, ultracapacitor charger", "title": "" }, { "docid": "d411b5b732f9d7eec4fc065bc410ae1b", "text": "What do you do to start reading robot hands and the mechanics of manipulation? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this robot hands and the mechanics of manipulation.", "title": "" } ]
scidocsrr
0ff418266e628784da3c3420baa86796
Predictive Entropy Search for Multi-objective Bayesian Optimization
[ { "docid": "687ac21bd828ae6d559ef9f55064dec0", "text": "We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments—active user modelling with preferences, and hierarchical reinforcement learning— and a discussion of the pros and cons of Bayesian optimization based on our experiences.", "title": "" } ]
[ { "docid": "28ab07763d682ae367b5c9ebd9c9ef13", "text": "Nowadays, the teaching-learning processes are constantly changing, one of the latest modifications promises to strengthen the development of digital skills and thinking in the participants, from an early age. In this sense, the present article shows the advances of a study oriented to the formation of programming abilities, computational thinking and collaborative learning in an initial education context. As part of the study it was initially proposed to conduct a training day for teachers who will participate in the experimental phase of the research, considering this human resource as a link of great importance to achieve maximum use of students in the development of curricular themes of the level, using ICT resources and programmable educational robots. The criterion and the positive acceptance expressed by the teaching group after the evaluation applied at the end of the session, constitute a good starting point for the development of the following activities that make up the research in progress.", "title": "" }, { "docid": "45d808ef2824bb57e4c1dd8d75960e63", "text": "The use of game-based learning in the classroom has turned out to be a trend nowadays. Most game-based learning tools and platforms are based on a quiz concept where the students can score points if they can choose the correct answer among multiple answers. Based on our experience in Faculty of Electrical Engineering, Universiti Teknologi MARA, most undergraduate students have difficulty to appreciate the Computer Programming course thus demotivating them in learning any programming related courses. Game-based learning approach using a Game-based Classroom Response System (GCRS) tool known as Kahoot is used to address this issue. This paper presents students' perceptions on Kahoot activity that they experienced in the classroom. The study was carried out by distributing a survey form to 120 students. Based on the feedback, majority of students enjoyed the activity and able to attract their interest in computer programming.", "title": "" }, { "docid": "ee9d84f08326cf48116337595dbe07f7", "text": "Facial fractures were described as early as the seventeenth century BC in the Edwin Smith surgical papyrus. In the eighteenth century, the French surgeon Desault described the unique propensity of the mandible to fracture in the narrow subcondylar region, which is commonly observed to this day. In a recent 5-year review of the National Trauma Data Base with more than 13,000 mandible fractures, condylar and subcondylar fractures made up 14.8% and 12.6% of all fractures respectively; taken together, more than any other site alone. This study, along with others, have confirmed that most modern-age condylar fractures occur in men, and are most often caused by motor vehicle accidents, and assaults. Historically, condylar fractures were managed in a closed fashion with various forms of immobilization or maxillomandibular fixation, with largely favorable results. Although the goals of treatment are the restoration of form and function, closed treatment relies on patient adaptation to an altered anatomy, because anatomic repositioning of the proximal segment is not achieved. 
However, the human body has a remarkable ability to adapt, and it remains an appropriate treatment of a large number of condylar fractures, including intracapsular fractures, fractures with minimal or no displacement, almost all pediatric condylar fractures, and fractures in patients whose medical or social situations preclude other forms of treatment. With advances in the understanding of osteosynthesis and an appreciation of surgical anatomy, open", "title": "" }, { "docid": "545de0009c9bba3538df2d9061c3ecb8", "text": "Attendance is one of the work ethics which is valued by most employers. In educational institutions also, attendance and academic success are directly related. Therefore, proper attendance management systems must be in place. Most of the educational institutions and government organizations in developing countries still use paper based attendance method to monitor the attendance. There is a need to replace these traditional methods of attendance recording with a more secure and robust system. Fingerprint based automated identification system based are gaining popularity due to unique nature of fingerprints. In this paper, a novel approach for fingerprint based attendance system using LabVIEW and GSM technology is proposed. Optical fingerprint module is used for capturing and processing fingerprints. Features such as recording of attendance in a text file along with the date and time of attendance are also incorporated in the system. GSM technology is used to intimate the parents about student’s attendance. The proposed system is implemented in the university and its performance is evaluated based upon user friendliness, accuracy, speed, security and cost.", "title": "" }, { "docid": "205c0c94d3f2dbadbc7024c9ef868d97", "text": "Solid dispersions (SD) of curcuminpolyvinylpyrrolidone in the ratio of 1:2, 1:4, 1:5, 1:6, and 1:8 were prepared in an attempt to increase the solubility and dissolution. Solubility, dissolution, powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) of solid dispersions, physical mixtures (PM) and curcumin were evaluated. Both solubility and dissolution of curcumin solid dispersions were significantly greater than those observed for physical mixtures and intact curcumin. The powder X-ray diffractograms indicated that the amorphous curcumin was obtained from all solid dispersions. It was found that the optimum weight ratio for curcumin:PVP K-30 is 1:6. The 1:6 solid dispersion still in the amorphous from after storage at ambient temperature for 2 years and the dissolution profile did not significantly different from freshly prepared. Keywords—Curcumin, polyvinylpyrrolidone K-30, solid dispersion, dissolution, physicochemical.", "title": "" }, { "docid": "d59c6a2dd4b6bf7229d71f3ae036328a", "text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. 
In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.", "title": "" }, { "docid": "88b7d4d545605783d27048987cdb8765", "text": "In this paper we analyze the security and usability of the state-of-the-art secure mobile messenger SIGNAL. In the first part of this paper we discuss the threat model current secure mobile messengers face. In the following, we conduct a user study to examine the usability of SIGNAL’s security features. Specifically, our study assesses if users are able to detect and deter man-in-the-middle attacks on the SIGNAL protocol. Our results show that the majority of users failed to correctly compare keys with their conversation partner for verification purposes due to usability problems and incomplete mental models. Hence users are very likely to fall for attacks on the essential infrastructure of today’s secure messaging apps: the central services to exchange cryptographic keys. We expect that our findings foster research into the unique usability and security challenges of state-of-theart secure mobile messengers and thus ultimately result in strong protection measures for the average user.", "title": "" }, { "docid": "64de73be55c4b594934b0d1bd6f47183", "text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.", "title": "" }, { "docid": "27f3060ef96f1656148acd36d50f02ce", "text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. 
In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. q 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "c7f27d8172233b0581286d756a482186", "text": "We propose cw2vec, a novel method for learning Chinese word embeddings. It is based on our observation that exploiting stroke-level information is crucial for improving the learning of Chinese word embeddings. Specifically, we design a minimalist approach to exploit such features, by using stroke n-grams, which capture semantic and morphological level information of Chinese words. Through qualitative analysis, we demonstrate that our model is able to extract semantic information that cannot be captured by existing methods. Empirical results on the word similarity, word analogy, text classification and named entity recognition tasks show that the proposed approach consistently outperforms state-of-the-art approaches such as word-based word2vec and GloVe, character-based CWE, component-based JWE and pixel-based GWE.", "title": "" }, { "docid": "683edd67fe4b1919228253fe5dd461cb", "text": "In oncology, the term 'hyperthermia' refers to the treatment of malignant diseases by administering heat in various ways. Hyperthermia is usually applied as an adjunct to an already established treatment modality (especially radiotherapy and chemotherapy), where tumor temperatures in the range of 40-43 degrees C are aspired. In several clinical phase-III trials, an improvement of both local control and survival rates have been demonstrated by adding local/regional hyperthermia to radiotherapy in patients with locally advanced or recurrent superficial and pelvic tumors. In addition, interstitial hyperthermia, hyperthermic chemoperfusion, and whole-body hyperthermia (WBH) are under clinical investigation, and some positive comparative trials have already been completed. In parallel to clinical research, several aspects of heat action have been examined in numerous pre-clinical studies since the 1970s. However, an unequivocal identification of the mechanisms leading to favorable clinical results of hyperthermia have not yet been identified for various reasons. 
This manuscript deals with discussions concerning the direct cytotoxic effect of heat, heat-induced alterations of the tumor microenvironment, synergism of heat in conjunction with radiation and drugs, as well as, the presumed cellular effects of hyperthermia including the expression of heat-shock proteins (HSP), induction and regulation of apoptosis, signal transduction, and modulation of drug resistance by hyperthermia.", "title": "" }, { "docid": "1af028a0cf88d0ac5c52e84019554d51", "text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.", "title": "" }, { "docid": "fcf0ac3b52a1db116463e7376dae4950", "text": "Although the ability to perform complex cognitive operations is assumed to be impaired following acute marijuana smoking, complex cognitive performance after acute marijuana use has not been adequately assessed under experimental conditions. In the present study, we used a within-participant double-blind design to evaluate the effects acute marijuana smoking on complex cognitive performance in experienced marijuana smokers. Eighteen healthy research volunteers (8 females, 10 males), averaging 24 marijuana cigarettes per week, completed this three-session outpatient study; sessions were separated by at least 72-hrs. During sessions, participants completed baseline computerized cognitive tasks, smoked a single marijuana cigarette (0%, 1.8%, or 3.9% Δ9-THC w/w), and completed additional cognitive tasks. Blood pressure, heart rate, and subjective effects were also assessed throughout sessions. Marijuana cigarettes were administered in a double-blind fashion and the sequence of Δ9-THC concentration order was balanced across participants. Although marijuana significantly increased the number of premature responses and the time participants required to complete several tasks, it had no effect on accuracy on measures of cognitive flexibility, mental calculation, and reasoning. Additionally, heart rate and several subjective-effect ratings (e.g., “Good Drug Effect,” “High,” “Mellow”) were significantly increased in a Δ9-THC concentration-dependent manner. These data demonstrate that acute marijuana smoking produced minimal effects on complex cognitive task performance in experienced marijuana users.", "title": "" }, { "docid": "2f9d5235bac1d8b3a9c26cd00e843fb9", "text": "K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. 
The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.", "title": "" }, { "docid": "dc5de8502003abd95420b89c7791b48b", "text": "Location tagging, also known as geotagging or geolocation, is the process of assigning geographical coordinates to input data. In this paper we present an algorithm for location tagging of textual documents. Our approach makes use of previous work in natural language processing by using a state-of-the-art part-of-speech tagger and named entity recognizer to find blocks of text which may refer to locations. A knowledge base (OpenStreatMap) is then used to find a list of possible locations for each of these blocks of text. Finally, one location is chosen for each block of text by assigning distance-based scores to each location and repeatedly selecting the location and block of text with the best score. We tested our geolocation algorithm with Wikipedia articles about topics with a well-defined geographical location that are geotagged by the articles’ authors, where classification approaches have achieved median errors as low as 11 km. However, the maximum accuracy of these approaches is limited by the class size, so future work may not yield significant improvement. Our algorithm tags a location to each block of text that was identified as a possible location reference, meaning a text is typically assigned multiple tags. When we considered only the tag with the highest distancebased score, we achieved a 10th percentile error of 490 metres and median error of 54 kilometres on the Wikipedia dataset we used. When we considered the five location tags with the greatest scores, we found that 50% of articles were assigned at least one tag within 8.5 kilometres of the article’s author-assigned true location. We also tested our approach on a set of Twitter messages that are tagged with the location from which the message was sent. This dataset is more challenging than the geotagged Wikipedia articles, because Twitter texts are shorter, tend to contain unstructured text, and may not contain information about the location from where the message was sent in the first place. Nevertheless, we make some interesting observations about potential use of our geolocation algorithm for this type of input. We explain how we use the Spark framework for data analytics to collect and process our test data. In general, classification-based approaches for location tagging may be reaching their upper limit for accuracy, but our precision-focused approach has high accuracy for some texts and shows significant potential for improvement overall.", "title": "" }, { "docid": "cec10dde2a3988b39d8b2e7655e92a3c", "text": "As the performance gap between the CPU and main memory continues to grow, techniques to hide memory latency are essential to deliver a high performance computer system. Prefetching can often overlap memory latency with computation for array-based numeric applications. However, prefetching for pointer-intensive applications still remains a challenging problem. Prefetching linked data structures (LDS) is difficult because the address sequence of LDS traversal does not present the same arithmetic regularity as array-based applications and the data dependence of pointer dereferences can serialize the address generation process.\nIn this paper, we propose a cooperative hardware/software mechanism to reduce memory access latencies for linked data structures. 
Instead of relying on the past address history to predict future accesses, we identify the load instructions that traverse the LDS, and execute them ahead of the actual computation. To overcome the serial nature of the LDS address generation, we attach a prefetch controller to each level of the memory hierarchy and push, rather than pull, data to the CPU. Our simulations, using four pointer-intensive applications, show that the push model can achieve between 4% and 30% larger reductions in execution time compared to the pull model.", "title": "" }, { "docid": "84a01029714dfef5d14bc4e2be78921e", "text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.", "title": "" }, { "docid": "ce03a26947b37829043406fe671869c5", "text": "Diagnosing students' knowledge proficiency, i.e., the mastery degrees of a particular knowledge point in exercises, is a crucial issue for numerous educational applications, e.g., targeted knowledge training and exercise recommendation. Educational theories have converged that students learn and forget knowledge from time to time. Thus, it is necessary to track their mastery of knowledge over time. However, traditional methods in this area either ignored the explanatory power of the diagnosis results on knowledge points or relied on a static assumption. To this end, in this paper, we devise an explanatory probabilistic approach to track the knowledge proficiency of students over time by leveraging educational priors. Specifically, we first associate each exercise with a knowledge vector in which each element represents an explicit knowledge point by leveraging educational priors (i.e., Q-matrix ). Correspondingly, each student is represented as a knowledge vector at each time in a same knowledge space. Second, given the student knowledge vector over time, we borrow two classical educational theories (i.e., Learning curve and Forgetting curve ) as priors to capture the change of each student's proficiency over time. After that, we design a probabilistic matrix factorization framework by combining student and exercise priors for tracking student knowledge proficiency. Extensive experiments on three real-world datasets demonstrate both the effectiveness and explanatory power of our proposed model.", "title": "" }, { "docid": "221970fad528f2538930556dde7a0062", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12]. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computerized face detection system, it does not have the bias inherent in such a database. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. 
We then show results for fine-tuning using a moderate-sized and public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.", "title": "" }, { "docid": "159610206e175126fa07f87a5fb28ab2", "text": "BACKGROUND\nThe aim of this review was to further define the clinical condition triquetrohamate (TH) impaction syndrome (THIS), an entity underreported and missed often. Its presentation, physical findings, and treatment are presented.\n\n\nMETHODS\nBetween 2009 and 2014, 18 patients were diagnosed with THIS. The age, sex, hand involved, activity responsible for symptoms, and defining characteristics were recorded. The physical findings, along with ancillary studies, were reviewed. Delay in diagnosis and misdiagnoses were assessed. Treatment, either conservative or surgical, is presented. Follow-up outcomes are presented.\n\n\nRESULTS\nThere were 15 male and 3 females, average age of 42 years. Two-handed sports such as golf and baseball accounted for more than 60% of the cases, and these cases were the only ones that involved the lead nondominant hand, pain predominantly at impact. Delay in diagnosis averaged greater than 7 months, with triangular fibrocartilage (TFCC) and extensor carpi ulnaris (ECU) accounting for more than 50% of misdiagnoses. Physical findings of note included pain over the TH joint, worse with passive dorsiflexion and ulnar deviation. Radiographic findings are described. Instillation of lidocaine with the wrist in radial deviation under fluoroscopic imaging with relief of pain helped to confirm the diagnosis. Conservative treatment was successful in 9 of 18 patients (50%), whereas in the remaining, surgical intervention allowed approximately 80% return to full activities without limitation.\n\n\nCONCLUSION\nTriquetrohamate impaction syndrome remains an underreported and often unrecognized cause of ulnar-sided wrist pain. In this report, the largest series to date, its presentation, defining characteristics, and treatment options are further elucidated.", "title": "" } ]
scidocsrr
837874147b77b30c53ced917c9faecb6
Semantic Annotation and Retrieval of Music and Sound Effects
[ { "docid": "f2603a583b63c1c8f350b3ddabe16642", "text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "title": "" } ]
[ { "docid": "a7f81672b718b7f5990330e3a77663a9", "text": "CHARLES A. THIGPEN: Effects Of Forward Head And Rounded Shoulder Posture On Scapular Kinematics, Muscle Activity, And Shoulder Coordination (Under the direction of Dr. Darin A. Padua) Forward head and rounded shoulder posture (FHRSP) has been identified as a potential risk factor for the development of shoulder pain. The mechanism through which forward head and rounded shoulder can facilitate shoulder injury is not well understood. Altered scapular kinematics, muscle activity, and shoulder joint coordination due to FHRSP may lead to the development of shoulder pain. However, there is little evidence to support the influence of FHRSP on scapular kinematics, muscle activity, and shoulder joint coordination. Therefore, the purpose of this study was to compare scapular kinematics, muscle activity, and shoulder joint coordination in individuals with and without FHRSP. Eighty volunteers without shoulder pain were classified as having FHRSP or ideal posture. An electromagnetic tracking system together with hard-wired surface electromyography was used to collect three-dimensional scapular kinematics concurrently with muscle activity of the upper and lower trapezius as well as the serratus anterior during", "title": "" }, { "docid": "e69f71cc98bce195d0cfb77ecdc31088", "text": "Wheat grass juice is the juice extracted from the pulp of wheat grass and has been used as a general-purpose health tonic for several years. Several of our patients in the thalassemia unit began consuming wheat grass juice after anecdotal accounts of beneficial effects on transfusion requirements. These encouraging experiences prompted us to evaluate the effect of wheat grass juice on transfusion requirements in patients with transfusion dependent beta thalassemia. Families of patients raised the wheat grass at home in kitchen garden/pots. The patients consumed about 100 mL of wheat grass juice daily. Each patient acted as his own control. Observations recorded during the period of intake of wheat grass juice were compared with one-year period preceding it. Variables recorded were the interval between transfusions, pre-transfusion hemoglobin, amount of blood transfused and the body weight. A beneficial effect of wheat grass juice was defined as decrease in the requirement of packed red cells (measured as grams/Kg body weight/year) by 25% or more. 16 cases were analyzed. Blood transfusion requirement fell by >25% in 8 (50%) patients with a decrease of >40% documented in 3 of these. No perceptible adverse effects were recognized.", "title": "" }, { "docid": "59daeea2c602a1b1d64bae95185f9505", "text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). 
Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.", "title": "" }, { "docid": "bc4fa6a77bf0ea02456947696dc6dca3", "text": "We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of largescale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4 to 10 times faster on average.", "title": "" }, { "docid": "55d584440f6925f12dd3a28917b10c85", "text": "Bitcoin and other similar digital currencies on blockchains are not ideal means for payment, because their prices tend to go up in the long term (thus people are incentivized to hoard those currencies), and to fluctuate widely in the short term (thus people would want to avoid risks of losing values). The reason why those blockchain currencies based on proof of work are unstable may be found in their designs that the supplies of currencies do not respond to their positive and negative demand shocks, as the authors have formulated in our past work. Continuing from our past work, this paper proposes minimal changes to the design of blockchain currencies so that their market prices are automatically stabilized, absorbing both positive and negative demand shocks of the currencies by autonomously controlling their supplies. Those changes are: 1) limiting re-adjustment of proof-of-work targets, 2) making mining rewards variable according to the observed over-threshold changes of block intervals, and 3) enforcing negative interests to remove old coins in circulation. We have made basic design checks of these measures through simple simulations. In addition to stabilization of prices, the proposed measures may have effects of making those currencies preferred means for payment by disincentivizing hoarding, and improving sustainability of the currency systems by making rewards to miners perpetual.", "title": "" }, { "docid": "e1008ecca5798a7c5c6048a945b2d25d", "text": "In this paper, we show for the first time how gradient TD (GTD) reinforcement learning methods can be formally derived as true stochastic gradient algorithms, not with respect to their original objective functions as previously attempted, but rather using derived primal-dual saddle-point objective functions. We then conduct a saddle-point error analysis to obtain finite-sample bounds on their performance. 
Previous analyses of this class of algorithms use stochastic approximation techniques to prove asymptotic convergence, and no finite-sample analysis had been attempted. Two novel GTD algorithms are also proposed, namely projected GTD2 and GTD2-MP, which use proximal “mirror maps” to yield improved convergence guarantees and acceleration, respectively. The results of our theoretical analysis imply that the GTD family of algorithms are comparable and may indeed be preferred over existing least squares TD methods for off-policy learning, due to their linear complexity. We provide experimental results showing the improved performance of our accelerated gradient TD methods.", "title": "" }, { "docid": "de5b79a5debac750a4970516778d926c", "text": "Vertical channel (VC) 3D NAND Flash may be categorized into two types of channel formation: (1) \"U-turn\" string, where both BL and source are connected at top thus channel current flows in a U-turn way; (2) \"Bottom source\", where source is connected at the bottom thus channel current flows only in one way. For the single-gate vertical channel (SGVC) 3D NAND architecture [1], it is also possible to develop a bottom source structure. The detailed array decoding method is illustrated. In this work, the challenges of bottom source processing and thin poly channel formation are extensively studied. It is found that the two-step poly formation and the bottom recess control are two key factors governing the device initial performance. In general, the two-step poly formation with additional poly spacer etching technique seems to cause degradation of both the poly mobility and device subthreshold slope. Sufficient thermal annealing is needed to recover the damage. Moreover, the bottom connection needs an elegant recess control for better read current as well as bottom ground-select transistor (GSL) device optimizations.", "title": "" }, { "docid": "4bc6e04b71ba3f6c2f79e1a2c99a9002", "text": "The SMAPH system implements a pipeline of four main steps: (1) Fetching -- it fetches the search results returned by a search engine given the query to be annotated; (2) Spotting -- search result snippets are parsed to identify candidate mentions for the entities to be annotated. This is done in a novel way by detecting the keywords-in-context by looking at the bold parts of the search snippets; (3) Candidate generation -- candidate entities are generated in two ways: from the Wikipedia pages occurring in the search results, and from an existing annotator, using the mentions identified in the spotting step as input; (4) Pruning -- a binary SVM classifier is used to decide which entities to keep/discard in order to generate the final annotation set for the query. The SMAPH system ranked third on the development set and first on the final blind test of the 2014 ERD Challenge short text track.", "title": "" }, { "docid": "a0b8c5f8c9c8592a9d59502d0a4014d1", "text": "OBJECTIVE\nPolymerase epsilon (POLE) is a DNA polymerase with a proofreading (exonuclease) domain, responsible for the recognition and excision of mispaired bases, thereby allowing high-fidelity DNA replication to occur. The Cancer Genome Atlas research network recently identified an ultramutated group of endometrial carcinomas, characterized by mutations in POLE, and exceptionally high substitution mutation rates. These POLE mutated endometrial tumors were almost exclusively of the endometrioid histotype. 
The prevalence and patterns of POLE mutated tumors in endometrioid carcinomas of the ovary, however, have not been studied in detail.\n\n\nMATERIALS AND METHODS\nIn this study, we investigate the frequency of POLE exonuclease domain mutations in a series of 89 ovarian endometrioid carcinomas.\n\n\nRESULTS\nWe found POLE mutations in 4 of 89 (4.5%) cases, occurring in 3 of 23 (13%) International Federation of Gynecology and Obstetrics (FIGO) grade 1, 1 of 43 (2%) FIGO grade 2, and 0 of 23 (0%) FIGO grade 3 tumors. All mutations were somatic missense point mutations, occurring at the commonly reported hotspots, P286R and V411L. All 3 POLE-mutated FIGO grade 1 tumors displayed prototypical histology, and the POLE-mutated FIGO grade 2 tumor displayed morphologic heterogeneity with focally high-grade features. All 4 patients with POLE-mutated tumors followed an uneventful clinical course with no disease recurrence; however, this finding was not statistically significant (P = 0.59).\n\n\nCONCLUSIONS\nThe low rate of POLE mutations in ovarian endometrioid carcinoma and their predominance within the low FIGO grade tumors are in contrast to the findings in the endometrium.", "title": "" }, { "docid": "aa88086e527a2da737eb1d5968a1f4a9", "text": "Video analytics will drive a wide range of applications with great potential to impact society. A geographically distributed architecture of public clouds and edges that extend down to the cameras is the only feasible approach to meeting the strict real-time requirements of large-scale live video analytics.", "title": "" }, { "docid": "238e1ce2e1611a3c5f28c77239f7fd87", "text": "Based on the power system's stability and control theory, the paper carries out intelligence stability analysis of power system with big data technology. With big data and data mining technology, the paper takes advantage of the useful information from massive data, and makes out the corresponding security and stability of power system intelligent analysis and decision making through the intelligent information processing. The paper studies the integrated strategy of power system multi-source heterogeneous data, and establishes the structure of power system panoramic data. Then the power system Intelligent Stability Analysis System(ISAS) framework is established online, including stability assessment system and control-ability evaluation system. Based on the study of the large power system network characteristics and topology intelligent analysis, the actual regional power system's ISAS is researched and analyzed.", "title": "" }, { "docid": "9fa05fdcaeb09d881e4bcc7e92cf8311", "text": "A new broadband in-phase power divider based on multilayer technology is presented. A simple design procedure is developed for the proposed multilayer power divider. An S-band four-way multilayer power divider was designed and measured. The simulated results are compared with the measured data, and good agreement is reported. The measured 15 dB return loss bandwidth is demonstrated to be about 72%, and its phase difference between the output signals is less than 38.", "title": "" }, { "docid": "459de602bf6e46ad4b752f2e51c81ffa", "text": "Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). 
In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.", "title": "" }, { "docid": "a17aac6a2ba9000e71eccbb21e31f529", "text": "Model-free reinforcement learning methods such as the Proximal Policy Optimization algorithm (PPO) have successfully applied in complex decision-making problems such as Atari games. However, these methods suffer from high variances and high sample complexity. On the other hand, model-based reinforcement learning methods that learn the transition dynamics are more sample efficient, but they often suffer from the bias of the transition estimation. How to make use of both model-based and model-free learning is a central problem in reinforcement learning. In this paper, we present a new technique to address the tradeoff between exploration and exploitation, which regards the difference between model-free and model-based estimations as a measure of exploration value. We apply this new technique to the PPO algorithm and arrive at a new policy optimization method, named Policy Optimization with Modelbased Explorations (POME). POME uses two components to predict the actions’ target values: a model-free one estimated by Monte-Carlo sampling and a model-based one which learns a transition model and predicts the value of the next state. POME adds the error of these two target estimations as the additional exploration value for each state-action pair, i.e, encourages the algorithm to explore the states with larger target errors which are hard to estimate. We compare POME with PPO on Atari 2600 games, and it shows that POME outperforms PPO on 33 games out of 49 games.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. 
On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "01a57e4a8bcc91fd5d172280a6b47577", "text": "Recommendation System Using Collaborative Filtering by Yunkyoung Lee Collaborative filtering is one of the well known and most extensive techniques in recommendation system its basic idea is to predict which items a user would be interested in based on their preferences. Recommendation systems using collaborative filtering are able to provide an accurate prediction when enough data is provided, because this technique is based on the user’s preference. User-based collaborative filtering has been very successful in the past to predict the customer’s behavior as the most important part of the recommendation system. However, their widespread use has revealed some real challenges, such as data sparsity and data scalability, with gradually increasing the number of users and items. To improve the execution time and accuracy of the prediction problem, this paper proposed item-based collaborative filtering applying dimension reduction in a recommendation system. It demonstrates that the proposed approach can achieve better performance and execution time for the recommendation system in terms of existing challenges, according to evaluation metrics using Mean Absolute Error (MAE).", "title": "" }, { "docid": "7515938d82cf5f9e6682cdf4793ac27d", "text": "Glioblastoma is an immunosuppressive, fatal brain cancer that contains glioblastoma stem-like cells (GSCs). Oncolytic herpes simplex virus (oHSV) selectively replicates in cancer cells while inducing anti-tumor immunity. oHSV G47Δ expressing murine IL-12 (G47Δ-mIL12), antibodies to immune checkpoints (CTLA-4, PD-1, PD-L1), or dual combinations modestly extended survival of a mouse glioma model. However, the triple combination of anti-CTLA-4, anti-PD-1, and G47Δ-mIL12 cured most mice in two glioma models. This treatment was associated with macrophage influx and M1-like polarization, along with increased T effector to T regulatory cell ratios. Immune cell depletion studies demonstrated that CD4+ and CD8+ T cells as well as macrophages are required for synergistic curative activity. This combination should be translatable to the clinic and other immunosuppressive cancers.", "title": "" }, { "docid": "509c4b0d3cfd457b1ef22ee5de1830b8", "text": "Convolutional neural nets (convnets) trained from massive labeled datasets [1] have substantially improved the state-of-the-art in image classification [2] and object detection [3]. However, visual understanding requires establishing correspondence on a finer level than object category. 
Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass aligment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011 [4].", "title": "" }, { "docid": "257eca5511b1657f4a3cd2adff1989f8", "text": "The monitoring of volcanoes is mainly performed by sensors installed on their structures, aiming at recording seismic activities and reporting them to observatories to be later analyzed by specialists. However, due to the high volume of data continuously collected, the use of automatic techniques is an important requirement to support real time analyses. In this sense, a basic but challenging task is the classification of seismic activities to identify signals yielded by different sources as, for instance, the movement of magmatic fluids. Although there exists several approaches proposed to perform such task, they were mainly designed to deal with raw signals. In this paper, we present a 2D approach developed considering two main steps. Firstly, spectrograms for every collected signal are calculated by using Fourier Transform. Secondly, we set a deep neural network to discriminate seismic activities by analyzing the spectrogram shapes. As a consequence, our classifier provided outstanding results with accuracy rates greater than 95%.", "title": "" }, { "docid": "c3eec24d9e7e051a34c72bdc301b3894", "text": "Scheduling has a significant influence on application performance. Deciding on a quantum length can be very tricky, especially when concurrent applications have various characteristics. This is actually the case in virtualized cloud computing environments where virtual machines from different users are colocated on the same physical machine. We claim that in a multi-core virtualized platform, different quantum lengths should be associated with different application types. We apply this principle in a new scheduler called AQL_Sched. We identified 5 main application types and experimentally found the best quantum length for each of them. Dynamically, AQL_Sched associates an application type with each virtual CPU (vCPU) and schedules vCPUs according to their type on physical CPU (pCPU) pools with the best quantum length. Therefore, each vCPU is scheduled on a pCPU with the best quantum length. We implemented a prototype of AQL_Sched in Xen and we evaluated it with various reference benchmarks (SPECweb2009, SPECmail2009, SPEC CPU2006, and PARSEC). The evaluation results show that AQL_Sched outperforms Xen's credit scheduler. For instance, up to 20%, 10% and 15% of performance improvements have been obtained with SPECweb2009, SPEC CPU2006 and PARSEC, respectively.", "title": "" } ]
scidocsrr
13cefe805419a1c3e889333347883769
A Joint Model of Language and Perception for Grounded Attribute Learning
[ { "docid": "47faebac1eecb05bc749f3e820c55486", "text": "Current approaches for semantic parsing take a supervised approach requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% of its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task.", "title": "" }, { "docid": "0670d09e35907b1d2efd29370b117b4c", "text": "Consumer depth cameras, such as the Microsoft Kinect, are capable of providing frames of dense depth values at real time. One fundamental question in utilizing depth cameras is how to best extract features from depth frames. Motivated by local descriptors on images, in particular kernel descriptors, we develop a set of kernel features on depth images that model size, 3D shape, and depth edges in a single framework. Through extensive experiments on object recognition, we show that (1) our local features capture different aspects of cues from a depth frame/view that complement one another; (2) our kernel features significantly outperform traditional 3D features (e.g. Spin images); and (3) we significantly improve the capabilities of depth and RGB-D (color+depth) recognition, achieving 10–15% improvement in accuracy over the state of the art.", "title": "" }, { "docid": "6b7daba104f8e691dd32cba0b4d66ecd", "text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.", "title": "" } ]
[ { "docid": "8fe7a08de96768ea04b89bd6eefd96bc", "text": "This paper introduces a new, unsupervised algorithm for noun phrase coreference resolution. It differs from existing methods in that it views coreference resolution as a clustering task. In an evaluation on the MUC-6 coreference resolution corpus, the algorithm achieves an F-measure of 53.6%, placing it firmly between the worst (40%) and best (65%) systems in the MUC-6 evaluation. More importantly, the clustering approach outperforms the only MUC-6 system to treat coreference resolution as a learning problem. The clustering algorithm appears to provide a flexible mechanism for coordinating the application of context-independent and context-dependent constraints and preferences for accurate partitioning of noun phrases into coreference equivalence classes.", "title": "" }, { "docid": "52da82decb732b3782ad1e3877fe6568", "text": "Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.", "title": "" }, { "docid": "4bd161b3e91dea05b728a72ade72e106", "text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: julio.rodriguez@epfl.ch and jrodrigu@physik.uni-bielefeld.de", "title": "" }, { "docid": "b4978b2fbefc79fba6e69ad8fd55ebf9", "text": "This paper proposes an approach based on Least Squares Suppo rt Vect r Machines (LS-SVMs) for solving second order parti al differential equations (PDEs) with variable coe fficients. Contrary to most existing techniques, the proposed m thod provides a closed form approximate solution. The optimal representat ion of the solution is obtained in the primal-dual setting. T he model is built by incorporating the initial /boundary conditions as constraints of an optimization prob lem. The developed method is well suited for problems involving singular, variable and const a t coefficients as well as problems with irregular geometrical domai ns. Numerical results for linear and nonlinear PDEs demonstrat e he efficiency of the proposed method over existing methods.", "title": "" }, { "docid": "96bc9c8fa154d8e6cc7d0486c99b43d5", "text": "A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output. In the ideal case such structures achieve a voltage gain which equals the number of transmission lines used. 
To achieve maximum efficiency, mismatch and secondary modes must be suppressed. Here we describe a TLT based on parallel plate transmission lines. The chosen geometry results in a high efficiency, due to good matching and minimized secondary modes. A second advantage of this design is that the electric field strength between the conductors is the same throughout the entire TLT. This makes the design suitable for high voltage applications. To investigate the concept of this TLT design, measurements are done on two different TLT designs. One TLT consists of 4 transmission lines, while the other one has 8 lines. Both designs are constructed of DiBond™. This material consists of a flat polyethylene inner core with an aluminum sheet on both sides. Both TLT's have an input impedance of 3.125 Ω. Their output impedances are 50 and 200 Ω, respectively. The measurements show that, on a matched load, this structure achieves a voltage gain factor of 3.9 when using 4 transmission lines and 7.9 when using 8 lines.", "title": "" }, { "docid": "528eded044a3567ed2a8b123767d473e", "text": "In our previous study, we presented a nonverbal interface that used biopotential signals, such as electrooculargraphic (EOG) and electromyographic (EMG), captured by a simple brain-computer interface. In this paper, we apply the nonverbal interface to hands-free control of an electric wheelchair. Based on the biopotential signals, the interface recognizes the operator's gestures, such as closing the jaw, wrinkling the forehead, and looking towards left and right. By combining these gestures, the operator controls linear and turning motions, velocity, and the steering angle of the wheelchair. Experimental results for navigating the wheelchair in a hallway environment confirmed the feasibility of the proposed method.", "title": "" }, { "docid": "f6783c1f37bb125fd35f4fbfedfde648", "text": "This paper presents an attributed graph-based approach to an intricate data mining problem of revealing affiliated, interdependent entities that might be at risk of being tempted into fraudulent transfer pricing. We formalize the notions of controlled transactions and interdependent parties in terms of graph theory. We investigate the use of clustering and rule induction techniques to identify candidate groups (hot spots) of suspect entities. Further, we find entities that require special attention with respect to transfer pricing audits using network analysis and visualization techniques in IBM i2 Analyst's Notebook.", "title": "" }, { "docid": "1a65b9d35bce45abeefe66882dcf4448", "text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. 
In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.", "title": "" }, { "docid": "eb92c76e00ed0970bbec416e49607394", "text": "This paper proposes an air-core transformer integration method, which mounts the transformer straightly into the multi-layer PCB, and maintains the proper distance between the inner transformer and other components on the top layer. Compared with other 3D integration method, the air-core transformer is optimized and modeled carefully to avoid the electromagnetic interference (EMI) of the magnetic fields. The integration method reduces the PCB area significantly, ensuring higher power density and similar efficiency as the conventional planar layout because the air-core transformer magnetic field does not affect other components. Moreover, the converters with the integrated PCB transformer can be manufactured with high consistency. With the air-core transformer, the overall height is only the sum of twice the PCB thickness and components height. In addition, the proposed integration method reduces the power loop inductance by 64%. It is applied to two resonant flyback converters operating at 20 MHz with Si MOSFETs, and 30 MHz with eGaN HEMTs respectively. The full load efficiency of the 30 MHz prototype is 80.1% with 5 V input and 5 V/ 2 W output. It achieves the power density of 32 W/in3.", "title": "" }, { "docid": "357a7c930f3beb730533e2220a94a022", "text": "The fused Lasso penalty enforces sparsity in both the coefficients and their successive differences, which is desirable for applications with features ordered in some meaningful way. The resulting problem is, however, challenging to solve, as the fused Lasso penalty is both non-smooth and non-separable. Existing algorithms have high computational complexity and do not scale to large-size problems. In this paper, we propose an Efficient Fused Lasso Algorithm (EFLA) for optimizing this class of problems. One key building block in the proposed EFLA is the Fused Lasso Signal Approximator (FLSA). To efficiently solve FLSA, we propose to reformulate it as the problem of finding an \"appropriate\" subgradient of the fused penalty at the minimizer, and develop a Subgradient Finding Algorithm (SFA). We further design a restart technique to accelerate the convergence of SFA, by exploiting the special \"structures\" of both the original and the reformulated FLSA problems. Our empirical evaluations show that, both SFA and EFLA significantly outperform existing solvers. We also demonstrate several applications of the fused Lasso.", "title": "" }, { "docid": "7956e5fd3372716cb5ae16c6f9e846fb", "text": "Understanding query intent helps modern search engines to improve search results as well as to display instant answers to the user. In this work, we introduce an accurate query classification method to detect the intent of a user search query. We propose using convolutional neural networks (CNN) to extract query vector representations as the features for the query classification. 
In this model, queries are represented as vectors so that semantically similar queries can be captured by embedding them into a vector space. Experimental results show that the proposed method can effectively detect intents of queries with higher precision and recall compared to current methods.", "title": "" }, { "docid": "438ad24a900164555542b7dbec65b929", "text": "This paper presents a method for sentiment analysis specifically designed to work with Twitter data (tweets), taking into account their structure, length and specific language. The approach employed makes it easily extendible to other languages and makes it able to process tweets in near real time. The main contributions of this work are: a) the pre-processing of tweets to normalize the language and generalize the vocabulary employed to express sentiment; b) the use minimal linguistic processing, which makes the approach easily portable to other languages; c) the inclusion of higher order n-grams to spot modifications in the polarity of the sentiment expressed; d) the use of simple heuristics to select features to be employed; e) the application of supervised learning using a simple Support Vector Machines linear classifier on a set of realistic data. We show that using the training models generated with the method described we can improve the sentiment classification performance, irrespective of the domain and distribution of the test sets.", "title": "" }, { "docid": "b2fb874fa2dadb8d3b2a23b111a85660", "text": "The aim of the present research is to study the relationship between “internet addiction” and “meta-cognitive skills” with “academic achievement” in students of Islamic Azad University, Hamedan branch. This is descriptive – correlational method is used. To measure meta-cognitive skills and internet addiction of students Wells questionnaire and Young questionnaire are used respectively. The population of the study is students of Islamic Azad University of Hamedan. Using proportional stratified random sampling the sample size was 375 students. The results of the study showed that there is no significant relationship between two variables of “meta-cognition” and “Internet addiction” (P >0.184). However, there is a significant relationship at 5% level between the two variables \"meta-cognition\" and \"academic achievement\" (P<0.002). Also, a significant inverse relationship was observed between the average of two variables of \"Internet addiction\" and \"academic achievement\" at 5% level (P <0.031). There is a significant difference in terms of metacognition among the groups of different fields of studies. Furthermore, there is a significant difference in terms of internet addiction scores among students belonging to different field of studies. In explaining the academic achievement variable variance of “meta-cognition” and “Internet addiction” using combined regression, it was observed that the above mentioned variables explain 16% of variable variance of academic achievement simultaneously.", "title": "" }, { "docid": "df4b4119653789266134cf0b7571e332", "text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods.
In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.", "title": "" }, { "docid": "b52fb324287ec47860e189062f961ad8", "text": "In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.", "title": "" }, { "docid": "d4bd583808c9e105264c001cbcb6b4b0", "text": "It is common for clinicians, researchers, and public policymakers to describe certain drugs or objects (e.g., games of chance) as “addictive,” tacitly implying that the cause of addiction resides in the properties of drugs or other objects. Conventional wisdom encourages this view by treating different excessive behaviors, such as alcohol dependence and pathological gambling, as distinct disorders. Evidence supporting a broader conceptualization of addiction is emerging. For example, neurobiological research suggests that addictive disorders might not be independent:2 each outwardly unique addiction disorder might be a distinctive expression of the same underlying addiction syndrome. Recent research pertaining to excessive eating, gambling, sexual behaviors, and shopping also suggests that the existing focus on addictive substances does not adequately capture the origin, nature, and processes of addiction. 
The current view of separate addictions is similar to the view espoused during the early days of AIDS diagnosis, when rare diseases were not", "title": "" }, { "docid": "fb1a178c7c097fbbf0921dcef915dc55", "text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.", "title": "" }, { "docid": "fbffbfcd9121ae576879e4021696f020", "text": "Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-stream fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.", "title": "" }, { "docid": "959352af8a9517da7e53347ecfa17585", "text": "OBJECTIVE\nElectronic health records (EHRs) are an increasingly common data source for clinical risk prediction, presenting both unique analytic opportunities and challenges. 
We sought to evaluate the current state of EHR based risk prediction modeling through a systematic review of clinical prediction studies using EHR data.\n\n\nMETHODS\nWe searched PubMed for articles that reported on the use of an EHR to develop a risk prediction model from 2009 to 2014. Articles were extracted by two reviewers, and we abstracted information on study design, use of EHR data, model building, and performance from each publication and supplementary documentation.\n\n\nRESULTS\nWe identified 107 articles from 15 different countries. Studies were generally very large (median sample size = 26 100) and utilized a diverse array of predictors. Most used validation techniques (n = 94 of 107) and reported model coefficients for reproducibility (n = 83). However, studies did not fully leverage the breadth of EHR data, as they uncommonly used longitudinal information (n = 37) and employed relatively few predictor variables (median = 27 variables). Less than half of the studies were multicenter (n = 50) and only 26 performed validation across sites. Many studies did not fully address biases of EHR data such as missing data or loss to follow-up. Average c-statistics for different outcomes were: mortality (0.84), clinical prediction (0.83), hospitalization (0.71), and service utilization (0.71).\n\n\nCONCLUSIONS\nEHR data present both opportunities and challenges for clinical risk prediction. There is room for improvement in designing such studies.", "title": "" }, { "docid": "3224233a8a91c8d44e366b7b2ab8e7a1", "text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.", "title": "" } ]
scidocsrr
bfda19d343bdf1a8d9a29e47626de9a5
Towards a secure network architecture for smart grids in 5G era
[ { "docid": "002aec0b09bbd2d0e3453c9b3aa8d547", "text": "It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.", "title": "" }, { "docid": "e5de9d00055e011fbe25636f12b467e6", "text": "The development of a trustworthy smart grid requires a deeper understanding of potential impacts resulting from successful cyber attacks. Estimating feasible attack impact requires an evaluation of the grid's dependency on its cyber infrastructure and its ability to tolerate potential failures. A further exploration of the cyber-physical relationships within the smart grid and a specific review of possible attack vectors is necessary to determine the adequacy of cybersecurity efforts. This paper highlights the significance of cyber infrastructure security in conjunction with power application security to prevent, mitigate, and tolerate cyber attacks. A layered approach is introduced to evaluating risk based on the security of both the physical power applications and the supporting cyber infrastructure. A classification is presented to highlight dependencies between the cyber-physical controls required to support the smart grid and the communication and computations that must be protected from cyber attack. The paper then presents current research efforts aimed at enhancing the smart grid's application and infrastructure security. Finally, current challenges are identified to facilitate future research efforts.", "title": "" } ]
[ { "docid": "62f52788757b0e9de06f124e162c3491", "text": "Throughout the evolution process, Earth's magnetic field (MF, about 50 microT) was a natural component of the environment for living organisms. Biological objects, flying on planned long-term interplanetary missions, would experience much weaker magnetic fields, since galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on functioning of biological organisms are still insufficiently understood, and is actively studied. Numerous experiments with seedlings of different plant species placed in weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with control. The proliferative activity and cell reproduction in meristem of plant roots are reduced in weak magnetic field. Cell reproductive cycle slows down due to the expansion of G1 phase in many plant species (and of G2 phase in flax and lentil roots), while other phases of cell cycle remain relatively stable. In plant cells exposed to weak magnetic field, the functional activity of genome at early pre-replicate period is shown to decrease. Weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At ultrastructural level, changes in distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids in meristem cells were observed in pea roots exposed to weak magnetic field. Mitochondria were found to be very sensitive to weak magnetic field: their size and relative volume in cells increase, matrix becomes electron-transparent, and cristae reduce. Cytochemical studies indicate that cells of plant roots exposed to weak magnetic field show Ca2+ over-saturation in all organelles and in cytoplasm unlike the control ones. The data presented suggest that prolonged exposures of plants to weak magnetic field may cause different biological effects at the cellular, tissue and organ levels. They may be functionally related to systems that regulate plant metabolism including the intracellular Ca2+ homeostasis. However, our understanding of very complex fundamental mechanisms and sites of interactions between weak magnetic fields and biological systems is still incomplete and still deserve strong research efforts.", "title": "" }, { "docid": "e054c2d3b52441eaf801e7d2dd54dce9", "text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. 
The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0ff727ff06c02d2e371798ad657153c9", "text": "Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.", "title": "" }, { "docid": "114affaf4e25819aafa1c11da26b931f", "text": "We propose a coherent mathematical model for human fingerprint images. Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.", "title": "" }, { "docid": "d7bb22eefbff0a472d3e394c61788be2", "text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "6838d497f81c594cb1760c075b0f5d48", "text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.", "title": "" }, { "docid": "c49ed75ce48fb92db6e80e4fe8af7127", "text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.", "title": "" }, { "docid": "124f40ccd178e6284cc66b88da98709d", "text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. 
Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.", "title": "" }, { "docid": "eae04aa2942bfd3752fb596f645e2c2e", "text": "PURPOSE\nHigh fasting blood glucose (FBG) can lead to chronic diseases such as diabetes mellitus, cardiovascular and kidney diseases. Consuming probiotics or synbiotics may improve FBG. A systematic review and meta-analysis of controlled trials was conducted to clarify the effect of probiotic and synbiotic consumption on FBG levels.\n\n\nMETHODS\nPubMed, Scopus, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature databases were searched for relevant studies based on eligibility criteria. Randomized or non-randomized controlled trials which investigated the efficacy of probiotics or synbiotics on the FBG of adults were included. Studies were excluded if they were review articles and study protocols, or if the supplement dosage was not clearly mentioned.\n\n\nRESULTS\nA total of fourteen studies (eighteen trials) were included in the analysis. Random-effects meta-analyses were conducted for the mean difference in FBG. Overall reduction in FBG observed from consumption of probiotics and synbiotics was borderline statistically significant (-0.18 mmol/L 95 % CI -0.37, 0.00; p = 0.05). Neither probiotic nor synbiotic subgroup analysis revealed a significant reduction in FBG. The result of subgroup analysis for baseline FBG level ≥7 mmol/L showed a reduction in FBG of 0.68 mmol/L (-1.07, -0.29; ρ < 0.01), while trials with multiple species of probiotics showed a more pronounced reduction of 0.31 mmol/L (-0.58, -0.03; ρ = 0.03) compared to single species trials.\n\n\nCONCLUSION\nThis meta-analysis suggests that probiotic and synbiotic supplementation may be beneficial in lowering FBG in adults with high baseline FBG (≥7 mmol/L) and that multispecies probiotics may have more impact on FBG than single species.", "title": "" }, { "docid": "097f1a491b7266b5d3baf7c7d1331bbe", "text": "A polysilicon transistor based active matrix organic light emitting diode (AMOLED) pixel with high pixel to pixel luminance uniformity is reported. The new pixel powers the OLEDs with small constant currents to ensure consistent brightness and extended life. Excellent pixel to pixel current drive uniformity is obtained despite the threshold voltage variation inherent in polysilicon transistors. Other considerations in the design of pixels for high information content AMOLED displays are discussed.", "title": "" }, { "docid": "8ee3d3200ed95cad5ff4ed77c08bb608", "text": "We present a rare case of a non-fatal impalement injury of the brain. A 13-year-old boy was found in his classroom unconsciously lying on floor. His classmates reported that they had been playing, and throwing building bricks, when suddenly the boy collapsed. The emergency physician did not find significant injuries. 
Upon admission to a hospital, CT imaging revealed a \"blood path\" through the brain. After clinical forensic examination, an impalement injury was diagnosed, with the entry wound just below the left eyebrow. Eventually, the police presented a variety of pointers that were suspected to have caused the injury. Forensic trace analysis revealed human blood on one of the pointers, and subsequent STR analysis linked the blood to the injured boy. Confronted with the results of the forensic examination, the classmates admitted that they had been playing \"sword fights\" using the pointers, and that the boy had been hit during the game. The case illustrates the difficulties of diagnosing impalement injuries, and identifying the exact cause of the injury.", "title": "" }, { "docid": "4cdf0df648d3ee5e8cf07001924f73ae", "text": "Electronic Health Records (EHR) narratives are a rich source of information, embedding high-resolution information of value to secondary research use. However, because the EHRs are mostly in natural language free-text and highly ambiguity-ridden, many natural language processing algorithms have been devised around them to extract meaningful structured information about clinical entities. The performance of the algorithms however, largely varies depending on the training dataset as well as the effectiveness of the use of background knowledge to steer the learning process.\n In this paper we study the impact of initializing the training of a neural network natural language processing algorithm with pre-defined clinical word embeddings to improve feature extraction and relationship classification between entities. We add our embedding framework to a bi-directional long short-term memory (Bi-LSTM) neural network, and further study the effect of using attention weights in neural networks for sequence labelling tasks to extract knowledge of Adverse Drug Reactions (ADRs). We incorporate unsupervised word embeddings using Word2Vec and GloVe from widely available medical resources such as Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) II corpora, Unified Medical Language System (UMLS) as well as embed pharmaco lexicon from available EHRs. Our algorithm, implemented using two datasets, shows that our architecture outperforms baseline Bi-LSTM or Bi-LSTM networks using linear chain and Skip-Chain conditional random fields (CRF).", "title": "" }, { "docid": "73104192eb7d098d15d14c347ba4b60e", "text": "The launching of Microsoft Kinect with skeleton tracking technique opens up new potentials for skeleton based human action recognition. However, the 3D human skeletons, generated via skeleton tracking from the depth map sequences, are generally very noisy and unreliable. In this paper, we introduce a robust informative joints based human action recognition method. Inspired by the instinct of the human vision system, we analyze the mean contributions of human joints for each action class via differential entropy of the joint locations. There is significant difference between most of the actions, and the contribution ratio is highly in accordance with common sense. We present a novel approach named skeleton context to measure similarity between postures and exploit it for action recognition. The similarity is calculated by extracting the multi-scale pairwise position distribution for each informative joint. Then feature sets are evaluated in a bag-of-words scheme using a linear CRFs. We report experimental results and validate the method on two public action dataset. 
Experiments results have shown that the proposed approach is discriminative for similar human action recognition and well adapted to the intra-class variation. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4c12d04ce9574aab071964e41f0c5f4e", "text": "The complete genome sequence of Treponema pallidum was determined and shown to be 1,138,006 base pairs containing 1041 predicted coding sequences (open reading frames). Systems for DNA replication, transcription, translation, and repair are intact, but catabolic and biosynthetic activities are minimized. The number of identifiable transporters is small, and no phosphoenolpyruvate:phosphotransferase carbohydrate transporters were found. Potential virulence factors include a family of 12 potential membrane proteins and several putative hemolysins. Comparison of the T. pallidum genome sequence with that of another pathogenic spirochete, Borrelia burgdorferi, the agent of Lyme disease, identified unique and common genes and substantiates the considerable diversity observed among pathogenic spirochetes.", "title": "" }, { "docid": "f53a2ca0fda368d0e90cbb38076658af", "text": "RNAi therapeutics is a powerful tool for treating diseases by sequence-specific targeting of genes using siRNA. Since its discovery, the need for a safe and efficient delivery system for siRNA has increased. Here, we have developed and characterized a delivery platform for siRNA based on the natural polysaccharide starch in an attempt to address unresolved delivery challenges of RNAi. Modified potato starch (Q-starch) was successfully obtained by substitution with quaternary reagent, providing Q-starch with cationic properties. The results indicate that Q-starch was able to bind siRNA by self-assembly formation of complexes. For efficient and potent gene silencing we monitored the physical characteristics of the formed nanoparticles at increasing N/P molar ratios. The minimum ratio for complete entrapment of siRNA was 2. The resulting complexes, which were characterized by a small diameter (~30 nm) and positive surface charge, were able to protect siRNA from enzymatic degradation. Q-starch/siRNA complexes efficiently induced P-glycoprotein (P-gp) gene silencing in the human ovarian adenocarcinoma cell line, NCI-ADR/Res (NAR), over expressing the targeted gene and presenting low toxicity. Additionally, Q-starch-based complexes showed high cellular uptake during a 24-hour study, which also suggested that intracellular siRNA delivery barriers governed the kinetics of siRNA transfection. In this study, we have devised a promising siRNA delivery vector based on a starch derivative for efficient and safe RNAi application.", "title": "" }, { "docid": "7267e5082c890dfa56a745d3b28425cc", "text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. 
This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.", "title": "" }, { "docid": "25828231caaf3288ed4fdb27df7f8740", "text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.", "title": "" }, { "docid": "8a0ee163723b4e0c2fa531669af3ae39", "text": "As the computer becomes more ubiquitous throughout society, the security of networks and information technologies is a growing concern. Recent research has found hackers making use of social media platforms to form communities where sharing of knowledge and tools that enable cybercriminal activity is common. However, past studies often report only generalized community behaviors and do not scrutinize individual members; in particular, current research has yet to explore the mechanisms in which some hackers become key actors within their communities. Here we explore two major hacker communities from the United States and China in order to identify potential cues for determining key actors. The relationships between various hacker posting behaviors and reputation are observed through the use of ordinary least squares regression. Results suggest that the hackers who contribute to the cognitive advance of their community are generally considered the most reputable and trustworthy among their peers. Conversely, the tenure of hackers and their discussion quality were not significantly correlated with reputation. 
Results are consistent across both forums, indicating the presence of a common hacker culture that spans multiple geopolitical regions.", "title": "" }, { "docid": "3ce0ea80f7ae945a4fef8cbde458c644", "text": "Deficits in 'executive function' (EF) are characteristic of several clinical disorders, most notably Autism Spectrum Disorders (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD). In this study, age- and IQ-matched groups with ASD, ADHD, or typical development (TD) were compared on a battery of EF tasks tapping three core domains: response selection/inhibition, flexibility, and planning/working memory. Relations between EF, age and everyday difficulties (rated by parents and teachers) were also examined. Both clinical groups showed significant EF impairments compared with TD peers. The ADHD group showed greater inhibitory problems on a Go-no-Go task, while the ASD group was significantly worse on response selection/monitoring in a cognitive estimates task. Age-related improvements were clearer in ASD and TD than in ADHD. At older (but not younger) ages, the ASD group outperformed the ADHD group, performing as well as the TD group on many EF measures. EF scores were related to specific aspects of communicative and social adaptation, and negatively correlated with hyperactivity in ASD and TD. Within the present groups, the overall findings suggested less severe and persistent EF deficits in ASD (including Asperger Syndrome) than in ADHD.", "title": "" } ]
scidocsrr
a927a6b74fcdbbd2233c46d7e695ce46
Sample Path Generation for Probabilistic Demand Forecasting
[ { "docid": "da1f4117851762bfb5ef80c0893248c3", "text": "The recently-developed WaveNet architecture (van den Oord et al., 2016a) is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.", "title": "" }, { "docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9", "text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.", "title": "" } ]
[ { "docid": "24c49ac0ed56f27982cfdad18054e466", "text": "This paper examines two alternative approaches to supporting code scheduling for multiple-instruction-issue processors. One is to provide a set of non-trapping instructions so that the compiler can perform aggressive static code scheduling. The application of this approach to existing commercial architectures typically requires extending the instruction set. The other approach is to support out-of-order execution in the microarchitecture so that the hardware can perform aggressive dynamic code scheduling. This approach usually does not require modifying the instruction set but requires complex hardware support. In this paper, we analyze the performance of the two alternative approaches using a set of important nonnumerical C benchmark programs. A distinguishing feature of the experiment is that the code for the dynamic approach has been optimized and scheduled as much as allowed by the architecture. The hardware is only responsible for the additional reordering that cannot be performed by the compiler. The overall result is that the clynamic and static approaches are comparable in performance. When applied to a four-instruction-issue processor, both methods achieve more than two times speedup over a high performance single-instruction-issue processor. However, the performance of each scheme varies among the benchmark programs. To explain this variation, we have identified the conditions in these programs that make one approach perform better than the other.", "title": "" }, { "docid": "1f5610ef68514343c8b04defc9c65c64", "text": "This paper examines the relationship between strategy and Total Quality Management (TQM) implementation, as well as the impact of the adaptation of both to organizational performance. We have used the emphasis on cost leadership, differentiation on marketing and differentiation on innovation as strategic dimensions to develop four great strategic configurations. The degrees of implementation of the TQM elements in each of them, as well as their associations to the various types of performances have been studied. Our results significantly support the hypotheses proposed, and suggest differences in TQM implementation depending on the selected strategy. It is also noticed that companies with greater degrees of co-alignment between their strategies and TQM are those with the highest levels of performance.", "title": "" }, { "docid": "be3c8186c6e818e7cdba74cc4e7148e2", "text": "A network latency emulator allows IT architects to thoroughly investigate how network latencies impact workload performance. Software-based emulation tools have been widely used by researchers and engineers. It is possible to use commodity server computers for emulation and set up an emulation environment quickly without outstanding hardware cost. However, existing software-based tools built in the network stack of an operating system are not capable of supporting the bandwidth of today's standard interconnects (e.g., 10GbE) and emulating sub-milliseconds latencies likely caused by network virtualization in a datacenter. In this paper, we propose a network latency emulator (DEMU) supporting broad bandwidth traffic with sub-milliseconds accuracy, which is based on an emerging packet processing framework, DPDK. It avoids the overhead of the network stack by directly interacting with NIC hardware. Through experiments, we confirmed that DEMU can emulate latencies on the order of 10 µs for short-packet traffic at the line rate of 10GbE. 
The standard deviation of inserted delays was only 2–3 µs. This is a significant improvement over a network emulator built in the Linux Kernel (i.e., NetEm), which loses more than 50% of its packets for the same 10GbE traffic. For 1 Gbps traffic, the latency deviation of NetEm was approximately 20 µs, while that of our mechanism was 2 orders of magnitude smaller (i.e., only 0.3 µs).", "title": "" }, { "docid": "9710abc9bc114470e25a4c12af58dc90", "text": "The growth in the number of mobile phone users has led to a dramatic increase in SMS spam messages. Though in most parts of the world the mobile messaging channel is currently regarded as “clean” and trusted, recent reports, by contrast, clearly indicate that the volume of mobile phone spam is increasing dramatically year by year. It is a growing problem, especially in the Middle East and Asia. SMS spam filtering is a comparatively recent task for dealing with this problem. It inherits many concerns and quick fixes from email spam filtering, but it also faces certain issues and problems of its own. This paper addresses the task of filtering mobile messages as ham or spam for Indian users by adding Indian messages to the available worldwide SMS dataset. The paper analyses different machine learning classifiers on a large corpus of SMS messages from Indian users.", "title": "" }, { "docid": "40e73596d477cf9282e9142785c71066", "text": "The broaden-and-build theory of positive emotions predicts that positive emotions broaden the scopes of attention and cognition, and, by consequence, initiate upward spirals toward increasing emotional well-being. The present study assessed this prediction by testing whether positive affect and broad-minded coping reciprocally and prospectively predict one another. One hundred thirty-eight college students completed self-report measures of affect and coping at two assessment periods 5 weeks apart. As hypothesized, regression analyses showed that initial positive affect, but not negative affect, predicted improved broad-minded coping, and initial broad-minded coping predicted increased positive affect, but not reductions in negative affect. Further mediational analyses showed that positive affect and broad-minded coping serially enhanced one another. These findings provide prospective evidence to support the prediction that positive emotions initiate upward spirals toward enhanced emotional well-being. Implications for clinical practice and health promotion are discussed.", "title": "" }, { "docid": "bd76b8e1e57f4e38618cf56f4b8d33e2", "text": "For impartial division, each participant reports only her opinion about the fair relative shares of the other participants, and this report has no effect on her own share. If a specific division is compatible with all reports, it is implemented. We propose a natural method meeting these requirements, for a division among four or more participants. No such method exists for a division among three participants.", "title": "" }, { "docid": "a71c53aed6a6805a5ebf0f69377411c0", "text": "Here we illustrate a new indoor navigation system. It is an outcome of creativity that merges an imaginative scenario with new technologies. The system is intended to guide a person through an unknown building by relying on technologies that do not depend on infrastructure. The system includes two key components, namely positioning and path planning. Positioning is based on geomagnetic fields, which overcomes several limitations of Wi-Fi, Bluetooth, and similar technologies.
Path planning is based on a new and optimized Ant Colony algorithm, called Ant Colony Optimization (ACO), which offers better performance than the classic A* algorithm. The paper illustrates the logic and the architecture of the system, and also presents experimental results.", "title": "" }, { "docid": "b95776a33ab5ff12d405523a90cbfb93", "text": "In this paper, we introduce the splitter placement problem in wavelength-routed networks (SP-WRN). Given a network topology, a set of multicast sessions, and a fixed number of multicast-capable cross-connects, the SP-WRN problem entails the placement of the multicast-capable cross-connects so that the blocking probability is minimized. The SP-WRN problem is NP-complete as it includes as a subproblem the routing and wavelength assignment problem, which is NP-complete. To gain a deeper insight into the computational complexity of the SP-WRN problem, we define a graph-theoretic version of the splitter placement problem (SPG), and show that even SPG is NP-complete. We develop three heuristics for the SP-WRN problem with different degrees of trade-off between computation time and quality of solution. The first heuristic uses the CPLEX general solver to solve an integer-linear program (ILP) of the problem. The second heuristic is based on a greedy approach and is called most-saturated node first (MSNF). The third heuristic employs simulated annealing (SA) with route-coordination. Through numerical examples on a wide variety of network topologies we demonstrate that: (1) no more than 50% of the cross-connects need to be multicast-capable, (2) the proposed SA heuristic provides fast near-optimal solutions, and (3) it is not practical to use general solvers such as CPLEX for solving the SP-WRN problem.", "title": "" }, { "docid": "e0a8035f9e61c78a482f2e237f7422c6", "text": "Aims: This paper examines how substantially decision-making and leadership styles relate to each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: A qualitative research approach was adopted in this study. A semi-structured interview was used to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education, International Islamic University", "title": "" }, { "docid": "037d8aa430923ddaaf5f7d280f5ea0c2", "text": "We describe a system that recognizes human postures with heavy self-occlusion. In particular, we address posture recognition in a robot assisted-living scenario, where the environment is equipped with a top-view camera for monitoring human activities. This setup is very useful because top-view cameras lead to accurate localization and limited inter-occlusion between persons, but conversely they suffer from body parts being frequently self-occluded. The conventional way of posture recognition relies on good estimation of body part positions, which turns out to be unstable in the top-view due to occlusion and foreshortening. In our approach, we learn a posture descriptor for each specific posture category. The posture descriptor encodes how well the person in the image can be `explained' by the model. The postures are subsequently recognized from the matching scores returned by the posture descriptors. We select the state-of-the-art approach of pose estimation as our posture descriptor.
The results show that our method is able to correctly classify 79.7% of the test sample, which outperforms the conventional approach by over 23%.", "title": "" }, { "docid": "b1a6c3765fe7194503e2b77e79a4a52c", "text": "Knowledge base population (KBP) systems take in a large document corpus and extract entities and their relations. Thus far, KBP evaluation has relied on judgements on the pooled predictions of existing systems. We show that this evaluation is problematic: when a new system predicts a previously unseen relation, it is penalized even if it is correct. This leads to significant bias against new systems, which counterproductively discourages innovation in the field. Our first contribution is a new importance-sampling based evaluation which corrects for this bias by annotating a new system’s predictions ondemand via crowdsourcing. We show this eliminates bias and reduces variance using data from the 2015 TAC KBP task. Our second contribution is an implementation of our method made publicly available as an online KBP evaluation service. We pilot the service by testing diverse state-ofthe-art systems on the TAC KBP 2016 corpus and obtain accurate scores in a cost effective manner.", "title": "" }, { "docid": "3596cd78712e41d5da0b5bfd3e5df4e2", "text": "In recent years, chip multiprocessors (CMP) have emerged as a solution for high-speed computing demands. However, power dissipation in CMPs can be high if numerous cores are simultaneously active. Dynamic voltage and frequency scaling (DVFS) is widely used to reduce the active power, but its effectiveness and cost depends on the granularity at which it is applied. Per-core DVFS allows the greatest flexibility in controlling power, but incurs the expense of an unrealistically large number of on-chip voltage regulators. Per-chip DVFS, where all cores are controlled by a single regulator overcomes this problem at the expense of greatly reduced flexibility. This work considers the problem of building an intermediate solution, clustering the cores of a multicore processor into DVFS domains and implementing DVFS on a per-cluster basis. Based on a typical workload, we propose a scheme to find similarity among the cores and cluster them based on this similarity. We also provide an algorithm to implement DVFS for the clusters, and evaluate the effectiveness of per-cluster DVFS in power reduction.", "title": "" }, { "docid": "5fd1be2414777efafc369000a816e3fc", "text": "Findings in the social psychology literatures on attitudes, social perception, and emotion demonstrate that social information processing involves embodiment, where embodiment refers both to actual bodily states and to simulations of experience in the brain's modality-specific systems for perception, action, and introspection. We show that embodiment underlies social information processing when the perceiver interacts with actual social objects (online cognition) and when the perceiver represents social objects in their absence (offline cognition). Although many empirical demonstrations of social embodiment exist, no particularly compelling account of them has been offered. We propose that theories of embodied cognition, such as the Perceptual Symbol Systems (PSS) account (Barsalou, 1999), explain and integrate these findings, and that they also suggest exciting new directions for research. 
We compare the PSS account to a variety of related proposals and show how it addresses criticisms that have previously posed problems for the general embodiment approach.", "title": "" }, { "docid": "be5e98bb924a81baa561a3b3870c4a76", "text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.", "title": "" }, { "docid": "62fa3b06e4fe2e0ac47efc991bbe612e", "text": "Drones are increasingly flying in sensitive airspace where their presence may cause harm, such as near airports, forest fires, large crowded events, secure buildings, and even jails. This problem is likely to expand given the rapid proliferation of drones for commerce, monitoring, recreation, and other applications. A cost-effective detection system is needed to warn of the presence of drones in such cases. In this paper, we explore the feasibility of inexpensive RF-based detection of the presence of drones. We examine whether physical characteristics of the drone, such as body vibration and body shifting, can be detected in the wireless signal transmitted by drones during communication. We consider whether the received drone signals are uniquely differentiated from other mobile wireless phenomena such as cars equipped with Wi- Fi or humans carrying a mobile phone. The sensitivity of detection at distances of hundreds of meters as well as the accuracy of the overall detection system are evaluated using software defined radio (SDR) implementation.", "title": "" }, { "docid": "dab9b12d72a639d1002f9eed0cce3c51", "text": "©Copyright 1996 by International Business Machines Corporation. 
Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.", "title": "" }, { "docid": "6a3fa2304cf3143d1809ee93f7f7b99d", "text": "Monaural singing voice separation task focuses on the prediction of the singing voice from a single channel music mixture signal. Current state of the art (SOTA) results in monaural singing voice separation are obtained with deep learning based methods. In this work we present a novel recurrent neural approach that learns long-term temporal patterns and structures of a musical piece. We build upon the recently proposed Masker-Denoiser (MaD) architecture and we enhance it with the Twin Networks, a technique to regularize a recurrent generative network using a backward running copy of the network. We evaluate our method using the Demixing Secret Dataset and we obtain an increment to signal-to-distortion ratio (SDR) of 0.37 dB and to signal-to-interference ratio (SIR) of 0.23 dB, compared to previous SOTA results.", "title": "" }, { "docid": "019f4534383668216108a456ac086610", "text": "Cloud computing is an emerging paradigm for large scale infrastructures. It has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on the budgeting of IT budgeting but also affect traditional security, trust and privacy mechanisms. Many of these mechanisms are no longer adequate, but need to be rethought to fit this new paradigm. In this paper we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.", "title": "" }, { "docid": "a261f7df775cbcc1f2b3a5f68fba6029", "text": "As the role of virtual teams in organizations becomes increasingly important, it is crucial that companies identify and leverage team members’ knowledge. Yet, little is known of how virtual team members come to recognize one another’s knowledge, trust one another’s expertise, and coordinate their knowledge effectively. In this study, we develop a model of how three behavioral dimensions associated with transactive memory systems (TMS) in virtual teams—expertise location, Ritu Agarwal was the accepting senior editor for this paper. Alberto Espinosa and Susan Gasson served as reviewers. The associate editor and a third reviewer chose to remain anonymous. Authors are listed alphabetically. Each contributed equally to the paper. task–knowledge coordination, and cognition-based trust—and their impacts on team performance change over time. Drawing on the data from a study that involves 38 virtual teams of MBA students performing a complex web-based business simulation game over an 8-week period, we found that in the early stage of the project, the frequency and volume of task-oriented communications among team members played an important role in forming expertise location and cognition-based trust. Once TMS were established, however, task-oriented communication became less important. 
Instead, toward the end of the project, task–knowledge coordination emerges as a key construct that influences team performance, mediating the impact of all other constructs. Our study demonstrates that TMS can be formed even in virtual team environments where interactions take place solely through electronic media, although they take a relatively long time to develop. Furthermore, our findings show that, once developed, TMS become essential to performing tasks effectively in virtual teams.", "title": "" } ]
scidocsrr
468ca2613a1e5673aaaceaa50c2fed83
Leveraging Intra-User and Inter-User Representation Learning for Automated Hate Speech Detection
[ { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" } ]
[ { "docid": "1b55f94f93a34ac1acf79cedfae10cfd", "text": "PROBLEM/CONDITION\nEach year in the United States, an estimated one in six residents requires medical treatment for an injury, and an estimated one in 10 residents visits a hospital emergency department (ED) for treatment of a nonfatal injury. This report summarizes national data on fatal and nonfatal injuries in the United States for 2001, by age; sex; mechanism, intent, and type of injury; and other selected characteristics.\n\n\nREPORTING PERIOD COVERED\nJanuary-December 2001.\n\n\nDESCRIPTION OF SYSTEM\n\n\n\nDESCRIPTION OF THE SYSTEM\nFatal injury data are derived from CDC's National Vital Statistics System (NVSS) and include information obtained from official death certificates throughout the United States. Nonfatal injury data, other than gunshot injuries, are from the National Electronic Injury Surveillance System All Injury Program (NEISS-AIP), a national stratified probability sample of 66 U.S. hospital EDs. Nonfatal firearm and BB/pellet gunshot injury data are from CDC's Firearm Injury Surveillance Study, being conducted by using the National Electronic Injury Surveillance System (NEISS), a national stratified probability sample of 100 U.S. hospital EDs.\n\n\nRESULTS\nIn 2001, approximately 157,078 persons in the United States (age-adjusted injury death rate: 54.9/100,000 population; 95% confidence interval [CI] = 54.6-55.2/100,000) died from an injury, and an estimated 29,721,821 persons with nonfatal injuries (age-adjusted nonfatal injury rate: 10404.3/100,000; 95% CI = 10074.9-10733.7/ 100,000) were treated in U.S. hospital EDs. The overall injury-related case-fatality rate (CFR) was 0.53%, but CFRs varied substantially by age (rates for older persons were higher than rates for younger persons); sex (rates were higher for males than females); intent (rates were higher for self-harm-related than for assault and unintentional injuries); and mechanism (rates were highest for drowning, suffocation/inhalation, and firearm-related injury). Overall, fatal and nonfatal injury rates were higher for males than females and disproportionately affected younger and older persons. For fatal injuries, 101,537 (64.6%) were unintentional, and 51,326 (32.7%) were violence-related, including homicides, legal intervention, and suicide. For nonfatal injuries, 27,551,362 (92.7%) were unintentional, and 2,155,912 (7.3%) were violence-related, including assaults, legal intervention, and self-harm. Overall, the leading cause of fatal injury was unintentional motor-vehicle-occupant injuries. The leading cause of nonfatal injury was unintentional falls; however, leading causes vary substantially by sex and age. For nonfatal injuries, the majority of injured persons were treated in hospital EDs for lacerations (25.8%), strains/sprains (20.2%), and contusions/abrasions (18.3%); the majority of injuries were to the head/neck region (29.5%) and the extremities (47.9%). Overall, 5.5% of those treated for nonfatal injuries in hospital EDs were hospitalized or transferred to another facility for specialized care.\n\n\nINTERPRETATION\nThis report provides the first summary report of fatal and nonfatal injuries that combines death data from NVSS and nonfatal injury data from NEISS-AIP. These data indicate that mortality and morbidity associated with injuries affect all segments of the population, although the leading external causes of injuries vary substantially by age and sex of injured persons. 
Injury prevention efforts should include consideration of the substantial differences in fatal and nonfatal injury rates, CFRs, and the leading causes of unintentional and violence-related injuries, in regard to the sex and age of injured persons.", "title": "" }, { "docid": "d1b6091e010cba3abc340efeab77a97b", "text": "Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. Its popularity is clearly influenced by the introduction of Google’s Knowledge Graph in 2012, and since then the term has been widely used without a definition. A large variety of interpretations has hampered the evolution of a common understanding of knowledge graphs. Numerous research papers refer to Google’s Knowledge Graph, although no official documentation about the used methods exists. The prerequisite for widespread academic and commercial adoption of a concept or technology is a common understanding, based ideally on a definition that is free from ambiguity. We tackle this issue by discussing and defining the term knowledge graph, considering its history and diversity in interpretations and use. Our goal is to propose a definition of knowledge graphs that serves as basis for discussions on this topic and contributes to a common vision.", "title": "" }, { "docid": "faf3967b2287b8bdfdf1ebc55bcd5910", "text": "As an essential step in many computer vision tasks, camera calibration has been studied extensively. In this paper, we propose a novel calibration technique that, based on geometric analysis, camera parameters can be estimated effectively and accurately from just one view of only five corresponding points. Our core contribution is the geometric analysis for deriving the basic equations to realize camera calibration from four coplanar corresponding points and a fifth noncoplanar one. The position, orientation, and focal length of a zooming camera can be directly estimated with unique solution. The estimated parameters are further optimized by the bundle adjustment technique. The proposed calibration method is examined and evaluated on both computer simulated data and real images. The experimental results confirm the validity of the proposed method that camera parameters can be estimated with sufficient accuracy using just five-point correspondences from a single image, even in the presence of image noise.", "title": "" }, { "docid": "438a9e517a98c6f98f7c86209e601f1b", "text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. 
In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.", "title": "" }, { "docid": "6838d497f81c594cb1760c075b0f5d48", "text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.", "title": "" }, { "docid": "a2d97c2b71e6424d3f458b7730be0c90", "text": "Fault detection in solar photovoltaic (PV) arrays is an essential task for increasing reliability and safety in PV systems. Because of PV's nonlinear characteristics, a variety of faults may be difficult to detect by conventional protection devices, leading to safety issues and fire hazards in PV fields. To fill this protection gap, machine learning techniques have been proposed for fault detection based on measurements, such as PV array voltage, current, irradiance, and temperature. However, existing solutions usually use supervised learning models, which are trained by numerous labeled data (known as fault types) and therefore, have drawbacks: 1) the labeled PV data are difficult or expensive to obtain, 2) the trained model is not easy to update, and 3) the model is difficult to visualize. To solve these issues, this paper proposes a graph-based semi-supervised learning model only using a few labeled training data that are normalized for better visualization. The proposed model not only detects the fault, but also further identifies the possible fault type in order to expedite system recovery. Once the model is built, it can learn PV systems autonomously over time as weather changes. 
Both simulation and experimental results show the effective fault detection and classification of the proposed method.", "title": "" }, { "docid": "14cb6aa11fae4c370542b58a20b93da4", "text": "Stray-current corrosion has been a source of concern for the transit authorities and utility companies since the inception of the electrified rail transit system. The corrosion problem caused by stray current was noticed within ten years of the first dc-powered rail line in the United States in 1888 [1] in Richmond, Virginia, and ever since, the control of stray current has been a critical issue. Similarly, the effects of rail and utility-pipe corrosion caused by stray current had been observed in Europe.", "title": "" }, { "docid": "9f5ab2f666eb801d4839fcf8f0293ceb", "text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "98162dc86a2c70dd55e7e3e996dc492c", "text": "PURPOSE\nTo evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999.\n\n\nMETHODS\nAll subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test.\n\n\nRESULTS\nSurveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). 
The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability.\n\n\nCONCLUSIONS\nThe Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.", "title": "" }, { "docid": "5c227388ee404354692ffa0b2f3697f3", "text": "Automotive surround view camera system is an emerging automotive ADAS (Advanced Driver Assistance System) technology that assists the driver in parking the vehicle safely by allowing him/her to see a top-down view of the 360 degree surroundings of the vehicle. Such a system normally consists of four to six wide-angle (fish-eye lens) cameras mounted around the vehicle, each facing a different direction. From these camera inputs, a composite bird-eye view of the vehicle is synthesized and shown to the driver in real-time during parking. In this paper, we present a surround view camera solution that consists of three key algorithm components: geometric alignment, photometric alignment, and composite view synthesis. Our solution produces a seamlessly stitched bird-eye view of the vehicle from four cameras. It runs real-time on DSP C66x producing an 880x1080 output video at 30 fps.", "title": "" }, { "docid": "77371cfa61dbb3053f3106f5433d23a7", "text": "We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.", "title": "" }, { "docid": "edfc15795f1f69d31c36f73c213d2b7d", "text": "Three studies tested whether adopting strong (relative to weak) approach goals in relationships (i.e., goals focused on the pursuit of positive experiences in one's relationship such as fun, growth, and development) predict greater sexual desire. Study 1 was a 6-month longitudinal study with biweekly assessments of sexual desire. 
Studies 2 and 3 were 2-week daily experience studies with daily assessments of sexual desire. Results showed that approach relationship goals buffered against declines in sexual desire over time and predicted elevated sexual desire during daily sexual interactions. Approach sexual goals mediated the association between approach relationship goals and daily sexual desire. Individuals with strong approach goals experienced even greater desire on days with positive relationship events and experienced less of a decrease in desire on days with negative relationships events than individuals who were low in approach goals. In two of the three studies, the association between approach relationship goals and sexual desire was stronger for women than for men. Implications of these findings for maintaining sexual desire in long-term relationships are discussed.", "title": "" }, { "docid": "981b4977ed3524545d9ae5016d45c8d6", "text": "Related to different international activities in the Optical Wireless Communications (OWC) field Graz University of Technology (TUG) has high experience on developing different high data rate transmission systems and is well known for measurements and analysis of the OWC-channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems in a controlled laboratory condition is proposed. Based on fibre optics technology, TUG testbed could effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture applies an optical variable attenuator as a main device representing the tropospheric influences over the launched Gaussian beam in the free space channel. In addition, the current scheme involves an attenuator control unit with an external Digital Analog Converter (DAC) controlled by self-developed software. To obtain optimal results in terms of the presented setup, a calibration process including linearization of the non-linear attenuation versus voltage graph is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC-systems, before testing under real conditions.", "title": "" }, { "docid": "30e93cb20194b989b26a8689f06b8343", "text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. 
Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.", "title": "" }, { "docid": "3d2200cc6b71995c6a4f88897bb73ea0", "text": "With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.", "title": "" }, { "docid": "def6762457fd4e95a35e3c83990c4943", "text": "The possibility of controlling dexterous hand prostheses by using a direct connection with the nervous system is particularly interesting for the significant improvement of the quality of life of patients, which can derive from this achievement. Among the various approaches, peripheral nerve based intrafascicular electrodes are excellent neural interface candidates, representing an excellent compromise between high selectivity and relatively low invasiveness. Moreover, this approach has undergone preliminary testing in human volunteers and has shown promise. In this paper, we investigate whether the use of intrafascicular electrodes can be used to decode multiple sensory and motor information channels with the aim to develop a finite state algorithm that may be employed to control neuroprostheses and neurocontrolled hand prostheses. The results achieved both in animal and human experiments show that the combination of multiple sites recordings and advanced signal processing techniques (such as wavelet denoising and spike sorting algorithms) can be used to identify both sensory stimuli (in animal models) and motor commands (in a human volunteer). These findings have interesting implications, which should be investigated in future experiments.", "title": "" }, { "docid": "8750e04065d8f0b74b7fee63f4966e59", "text": "The Customer churn is a crucial activity in rapidly growing and mature competitive telecommunication sector and is one of the greatest importance for a project manager. Due to the high cost of acquiring new customers, customer churn prediction has emerged as an indispensable part of telecom sectors’ strategic decision making and planning process. It is important to forecast customer churn behavior in order to retain those customers that will churn or possible may churn. This study is another attempt which makes use of rough set theory, a rule-based decision making technique, to extract rules for churn prediction. Experiments were performed to explore the performance of four different algorithms (Exhaustive, Genetic, Covering, and LEM2). It is observed that rough set classification based on genetic algorithm, rules generation yields most suitable performance out of the four rules generation algorithms. 
Moreover, by applying the proposed technique on publicly available dataset, the results show that the proposed technique can fully predict all those customers that will churn or possibly may churn and also provides useful information to strategic decision makers as well.", "title": "" }, { "docid": "e63836b5053b7f56d5ad5081a7ef79b7", "text": "This paper presents interfaces for exploring large collections of fonts for design tasks. Existing interfaces typically list fonts in a long, alphabetically-sorted menu that can be challenging and frustrating to explore. We instead propose three interfaces for font selection. First, we organize fonts using high-level descriptive attributes, such as \"dramatic\" or \"legible.\" Second, we organize fonts in a tree-based hierarchical menu based on perceptual similarity. Third, we display fonts that are most similar to a user's currently-selected font. These tools are complementary; a user may search for \"graceful\" fonts, select a reasonable one, and then refine the results from a list of fonts similar to the selection. To enable these tools, we use crowdsourcing to gather font attribute data, and then train models to predict attribute values for new fonts. We use attributes to help learn a font similarity metric using crowdsourced comparisons. We evaluate the interfaces against a conventional list interface and find that our interfaces are preferred to the baseline. Our interfaces also produce better results in two real-world tasks: finding the nearest match to a target font, and font selection for graphic designs.", "title": "" }, { "docid": "e812bed02753b807d1e03a2e05e87cb8", "text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. 
Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" } ]
scidocsrr
9330d2a67c5aa4141c81912a647641c5
On the combination of domain-specific heuristics for author name disambiguation: the nearest cluster method
[ { "docid": "ee79f55fe096b195984ecdc1fc570179", "text": "In bibliographies like DBLP and Citeseer, there are three kinds of entity-name problems that need to be solved. First, multiple entities share one name, which is called the name sharing problem. Second, one entity has different names, which is called the name variant problem. Third, multiple entities share multiple names, which is called the name mixing problem. We aim to solve these problems based on one model in this paper. We call this task complete entity resolution. Different from previous work, our work use global information based on data with two types of information, words and author names. We propose a generative latent topic model that involves both author names and words — the LDA-dual model, by extending the LDA (Latent Dirichlet Allocation) model. We also propose a method to obtain model parameters that is global information. Based on obtained model parameters, we propose two algorithms to solve the three problems mentioned above. Experimental results demonstrate the effectiveness and great potential of the proposed model and algorithms.", "title": "" }, { "docid": "d29f2b03b3ebe488a935e19d87c37226", "text": "Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by 2 factors: name compatibility and probability level. With transitivity violation correction, high precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. With integration of the author name disambiguation system into the PubMed search engine, the overall click-through-rate of PubMed users on author name query results improved from 34.9% to 36.9%.", "title": "" }, { "docid": "7f57322b6e998d629d1a67cd5fb28da9", "text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). 
Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.", "title": "" } ]
[ { "docid": "4599791edd82107f40afc86a1367bf19", "text": "Acknowledgements It is a great pleasure to have an opportunity to thanks valuable beings for their continuous support and inspiration throughout the thesis work. I would like to extend my gratitude towards Dr. for all the guidance and great knowledge he shared during our course. The abundance of knowledge he has always satisfied our queries at every point. Thanks to Mr. Sumit Miglani, My guide for his contribution for timely reviews and suggestions in completing the thesis. Every time he provided the needed support and guidance. At last but not the least, a heartiest thanks to all my family and friends for being there every time I needed them. Abstract Address Resolution Protocol (ARP) is a protocol having simple architecture and have been in use since the advent of Open System Interconnection (OSI) network architecture. Its been working at network layer for the important dynamic conversion of network address i.e. Internet Protocol (IP) address to physical address or Media Access Control (MAC) address. Earlier it was sufficiently providing its services but in today \" s complex and more sophisticated unreliable network, security being one major issue, standard ARP protocol is vulnerable to many different kinds of attacks. These attacks lead to devastating loss of important information. With certain loopholes it has become easy to be attacked and with not so reliable security mechanism, confidentiality of data is being compromised. Therefore, a strong need is felt to harden the security system. Since, LAN is used in maximum organizations to get the different computer connected. So, an attempt has been made to enhance the working of ARP protocol to work in a more secure way. Any kind of attempts to poison the ARP cache (it maintains the corresponding IP and MAC address associations in the LAN network) for redirecting the data to unreliable host, are prevented beforehand. New modified techniques are proposed which could efficiently guard our ARP from attacker and protect critical data from being sniffed both internally and externally. Efficiency of these methods has been shown mathematically without any major impact on the performance of network. Main idea behind how these methods actually work and proceed to achieve its task has been explained with the help of flow chart and pseudo codes. With the help of different tools ARP cache is being monitored regularly and if any malicious activity is encountered, it is intimidated to the administrator immediately. So, in …", "title": "" }, { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. 
JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "9a7e491e4d4490f630b55a94703a6f00", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "df331d60ab6560808e28e3813766b67b", "text": "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications.", "title": "" }, { "docid": "64d4776be8e2dbb0fa3b30d6efe5876c", "text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). 
The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.", "title": "" }, { "docid": "46b2f2dd5b17fd5108ac7f60144ff017", "text": "Accurately detecting pedestrians in images plays a critically important role in many computer vision applications. Extraction of effective features is the key to this task. Promising features should be discriminative, robust to various variations and easy to compute. In this work, we present novel features, termed dense center-symmetric local binary patterns (CS-LBP) and pyramid center-symmetric local binary/ternary patterns (CS-LBP/LTP), for pedestrian detection. The standard LBP proposed by Ojala et al. [1] mainly captures the texture information. The proposed CS-LBP feature, in contrast, captures the gradient information and some texture information. Moreover, the proposed dense CS-LBP and the pyramid CS-LBP/LTP are easy to implement and computationally efficient, which is desirable for real-time applications. Experiments on the INRIA pedestrian dataset show that the dense CS-LBP feature with linear support vector machines (SVMs) is comparable with the histograms of oriented gradients (HOG) feature with linear SVMs, and the pyramid CS-LBP/LTP features outperform both HOG features with linear SVMs and the state-of-the-art pyramid HOG (PHOG) feature with the histogram intersection kernel SVMs. We also demonstrate that the combination of our pyramid CS-LBP feature and the PHOG feature could significantly improve the detection performance—producing state-of-the-art accuracy on the INRIA pedestrian dataset.", "title": "" }, { "docid": "46f5a02253740bb9bc728ea5cbf8474a", "text": "The tongue is an elaborate complex of heterogeneous tissues with taste organs of diverse embryonic origins. The lingual taste organs are papillae, composed of an epithelium that includes specialized taste buds, the basal lamina, and a lamina propria core with matrix molecules, fibroblasts, nerves, and vessels. Because taste organs are dynamic in cell biology and sensory function, homeostasis requires tight regulation in specific compartments or niches. Recently, the Hedgehog (Hh) pathway has emerged as an essential regulator that maintains lingual taste papillae, taste bud and progenitor cell proliferation and differentiation, and neurophysiological function. Activating or suppressing Hh signaling, with genetic models or pharmacological agents used in cancer treatments, disrupts taste papilla and taste bud integrity and can eliminate responses from taste nerves to chemical stimuli but not to touch or temperature. Understanding Hh regulation of taste organ homeostasis contributes knowledge about the basic biology underlying taste disruptions in patients treated with Hh pathway inhibitors.", "title": "" }, { "docid": "2f6812663bd90381008ed4bcd16d58d5", "text": "In this paper, we propose a SIW fed circularly polarized (CP) tapered slot antenna which has a good return loss and a wide band axial-ratio. The performance of the antenna has been extensively optimized based on both CST Microwave Studio and Ansoft-HFSS simulation to obtain wide band CP operation. 
In the following sections, we introduce the theory of operation, the details of the design and the measured results.", "title": "" }, { "docid": "c9aa8e3ca2f1fc9b4f6b745970e55eee", "text": "Embedded systems for safety-critical applications often integrate multiple “functions” and must generally be fault-tolerant. These requirements lead to a need for mechanisms and services that provide protection against fault propagation and ease the construction of distributed fault-tolerant applications. A number of bus architectures have been developed to satisfy this need. This paper reviews the requirements on these architectures, the mechanisms employed, and the services provided. Four representative architectures (SAFEbus TM , SPIDER, TTA, and FlexRay) are briefly described.", "title": "" }, { "docid": "b48c48af88cf9fb0b7e1bef2952ef516", "text": "During last decades there has been a continuous growth of aquaculture industries all over the world and taking into consideration the spurt in freshwater ornamental fish aquaculture and trade in Kerala, the present study was aimed to assess the prevalence of various motile Aeromonas spp. in fresh water ornamental fishes and associated carriage water. The extracellular virulence factors and the antibiogram of the isolates were also elucidated. Various species of motile aeromonads such as Aeromonas caviae, A. hydrophila, A. jandaei, A. schubertii, A. sobria, A. trota and A. veronii were detected. Aeromonas sobria predominated both fish and water samples. Extracellular enzymes and toxins produced by motile aeromonds are important elements of bacterial virulence. The production of extracellular virulence factors proteases, lipase, DNase and haemolysin by the isolates were studied. All the isolates from both fish and water samples produced gelatinase and nuclease but the ability to produce lipase, caseinase and haemolysins was found to vary among isolates from different sources. Among the 15 antibiotics to which the isolates were tested, all the isolates were found to be sensitive to chloramphenicol, ciprofloxacin and gentamicin and resistant to amoxycillin. Local aquarists maintain the fish in crowded stressful conditions, which could trigger infections by the obligate/ opportunistic pathogenic members among motile aeromonads.", "title": "" }, { "docid": "e191dc25d17c79dbbfc5e6e09ad4e3e0", "text": "Capacitive touch-screen technology introduces new concepts to user interfaces, such as multi-touch, pinch zoom-in/out gestures, thus expanding the smartphone market. However, capacitive touch-screen technology still suffers from performance degradation like a low frame scan rate and poor accuracy, etc. One of the key performance factors is the immunity to external noise, which intrudes randomly into the touch-screen system. HUM, display noise, and SMPS are such noise sources. The main electrical power source produces HUM, one of the most important sources of noise, which has a 50 or 60Hz component. Display noise is emitted when an LCD or OLED is driven by the internal timing controller, which generates the driving signal in the tens of kHz range. The touch performance of On-Cell or In-Cell touch displays is seriously affected by this kind of noise, because the distance between the display pixel layer and the capacitive touchscreen panel is getting smaller. SMPS is another noise source that ranges up to 300kHz. The charger for a smart-phone, the USB port in a computer, a tri-phosphor fluorescent light bulb are all examples of sources of SMPS. 
There have been many attempts to remove such noise. Amplitude modulation with frequency hopping is proposed in [1]. However, when the noise environment changes, this method needs recalibration, resulting in non-constant touch response time. Another method tries to filter the noise from the display [2], but it does not remove other noise sources like HUM or SMPS.", "title": "" }, { "docid": "bb2f580127bdfb305dcb8ff1a2dd790f", "text": "Rapid growth of the demand for computational power by scientific, business and web-applications has led to the creation of large-scale data centers consuming enormous amounts of electrical power. We propose an energy efficient resource management system for virtualized Cloud data centers that reduces operational costs and provides required Quality of Service (QoS). Energy savings are achieved by continuous consolidation of VMs according to current utilization of resources, virtual network topologies established between VMs and thermal state of computing nodes. We present first results of simulation-driven evaluation of heuristics for dynamic reallocation of VMs using live migration according to current requirements for CPU performance. The results show that the proposed technique brings substantial energy savings, while ensuring reliable QoS. This justifies further investigation and development of the proposed resource management system.", "title": "" }, { "docid": "395e686b9de24b0dc203edbd03607551", "text": "Analysis and forecasting of sequential data, key problems in various domains of engineering and science, have attracted the attention of many researchers from different communities. When predicting the future probability of events using time series, recurrent neural networks (RNNs) are an effective tool that have the learning ability of feedforward neural networks and expand their expression ability using dynamic equations. Moreover, RNNs are able to model several computational structures. Researchers have developed various RNNs with different architectures and topologies. To summarize the work of RNNs in forecasting and provide guidelines for modeling and novel applications in future studies, this review focuses on applications of RNNs for time series forecasting in environmental factor forecasting. We present the structure, processing flow, and advantages of RNNs and analyze the applications of various RNNs in time series forecasting. In addition, we discuss limitations and challenges of applications based on RNNs and future research directions. Finally, we summarize applications of RNNs in forecasting.", "title": "" }, { "docid": "541c6cbc02461c743ff1573c2424e75b", "text": "The development of scheduling algorithms is directly related to the development of the operating system, which brings difficulties in implementation. Any modification of the scheduling algorithm appears as a modification of the operating system kernel code. The processor is an important resource in the CPU scheduling process, so scheduling becomes very important for accomplishing the operating system design goals. A delicate problem for the well-functioning of the operating system is the case when two or more processes arrive at the CPU and want to be executed. Scheduling includes a range of mechanisms and policies that the operating system has to follow so that all processes receive service. In this paper we discuss two main batch algorithms, FCFS and SJF, and show a manner in which these algorithms can be improved in future work.", "title": "" }, { "docid": "132bb5b7024de19f4160664edca4b4f5", "text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organizations in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in the short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.", "title": "" }, { "docid": "623e62e756321d14bb552a1ef364e4a5", "text": "With the wide deployment of smart card automated fare collection (SCAFC) systems, public transit agencies have been benefiting from huge volume of transit data, a kind of sequential data, collected every day. Yet, improper publishing and use of transit data could jeopardize passengers' privacy. In this paper, we present our solution to transit data publication under the rigorous differential privacy model for the Société de transport de Montréal (STM). We propose an efficient data-dependent yet differentially private transit data sanitization approach based on a hybrid-granularity prefix tree structure. Moreover, as a post-processing step, we make use of the inherent consistency constraints of a prefix tree to conduct constrained inferences, which lead to better utility. Our solution not only applies to general sequential data, but also can be seamlessly extended to trajectory data. To our best knowledge, this is the first paper to introduce a practical solution for publishing large volume of sequential data under differential privacy. We examine data utility in terms of two popular data analysis tasks conducted at the STM, namely count queries and frequent sequential pattern mining. Extensive experiments on real-life STM datasets confirm that our approach maintains high utility and is scalable to large datasets.", "title": "" }, { "docid": "494388072f3d7a62d00c5f3b5ad7a514", "text": "Recent years have seen an increasing interest in providing accurate prediction models for electrical energy consumption. In Smart Grids, energy consumption optimization is critical to enhance power grid reliability, and avoid supply-demand mismatches. 
Utilities rely on real-time power consumption data from individual customers in their service area to forecast the future demand and initiate energy curtailment programs. Currently, however, little is known about the differences in consumption characteristics of various customer types, and their impact on the prediction method's accuracy. While many studies have concentrated on aggregate loads, showing that accurate consumption prediction at the building level can be achieved, there is a lack of results regarding individual customers' consumption prediction. In this study, we perform an empirical quantitative evaluation of various prediction methods of kWh energy consumption of two distinct customer types: 1) small, highly variable individual customers, and 2) aggregated, more stable consumption at the building level. We show that prediction accuracy heavily depends on customer type. Contrary to previous studies, we consider the consumption data granularity to be very small (i.e., 15-min interval), and focus on very short term predictions (next few hours). As Smart Grids move closer to dynamic curtailment programs, which enable demand response (DR) events not only on weekdays, but also during weekends, existing DR strategies prove to be inadequate. Here, we relax the constraint of workdays, and include weekends, where ISO models consistently underperform. Nonetheless, we show that simple ISO baselines, and short-term Time Series models, which only depend on recent historical data, achieve superior prediction accuracy. This result suggests that large amounts of historical training data are not required; rather, they should be avoided.", "title": "" }, { "docid": "2498b05e75da58f06870b4b6c44fa991", "text": "Baseline NMT decoder: p(yt|y1, ..., yt−1, ct) ≈ g(ht, ct, yt−1), with ht = f(ht−1, yt−1). Self-attentive residual decoder: p(yt|y1, ..., yt−1, ct) ≈ g(ht, ct, dt), with ht = f(ht−1, yt−1) and dt = fa(y1, ..., yt−1). • The baseline NMT decoder uses a residual connection to the previously predicted word yt−1. • We propose to use residual connections from all previously translated words y1, ..., yt−1, with a summary vector dt.", "title": "" }, { "docid": "11d418decc0d06a3af74be77d4c71e5e", "text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.", "title": "" } ]
scidocsrr
a4f301505b8c968882ad3479e8f0437e
Energy Efficient Computational Offloading Framework for Mobile Cloud Computing
[ { "docid": "8eb0f822b4e8288a6b78abf0bf3aecbb", "text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.", "title": "" } ]
[ { "docid": "c4b4e2da1f3686b195c32e8135f9c821", "text": "Understanding cognition has been a central focus for psychologists, neuroscientists and philosophers for thousands of years, but many of its most fundamental processes remain very poorly understood. Chief among these is the process of thought itself: the spontaneous emergence of specific ideas within the stream of consciousness. It is widely accepted that ideas, both familiar and novel, arise from the combination of existing concepts. From this perspective, thought is an emergent attribute of memory, arising from the intrinsic dynamics of the neural substrate in which information is embedded. An important issue in any understanding of this process is the relationship between the emergence of conceptual combinations and the dynamics of the underlying neural networks. Virtually all theories of ideation hypothesize that ideas arise during the thought process through association, each one triggering the next through some type of linkage, e.g., structural analogy, semantic similarity, polysemy, etc. In particular, it has been suggested that the creativity of ideation in individuals reflects the qualitative structure of conceptual associations in their minds. Interestingly, psycholinguistic studies have shown that semantic networks across many languages have a particular type of structure with small-world, scale free connectivity. So far, however, these related insights have not been brought together, in part because there has been no explicitly neural model for the dynamics of spontaneous thought. Recently, we have developed such a model. Though simplistic and abstract, this model attempts to capture the most basic aspects of the process hypothesized by theoretical models within a neurodynamical framework. It represents semantic memory as a recurrent semantic neural network with itinerant dynamics. Conceptual combinations arise through this dynamics as co-active groups of neural units, and either dissolve quickly or persist for a time as emergent metastable attractors and are recognized consciously as ideas. The work presented in this paper describes this model in detail, and uses it to systematically study the relationship between the structure of conceptual associations in the neural substrate and the ideas arising from this system's dynamics. In particular, we consider how the small-world and scale-free characteristics influence the effectiveness of the thought process under several metrics, and show that networks with both attributes indeed provide significant advantages in generating unique conceptual combinations.", "title": "" }, { "docid": "862d6e15fcf6768c0cff5e4a8fb2227c", "text": "The number of immune cells, especially dendritic cells and cytotoxic tumor infiltrating lymphocytes (TIL), particularly Th1 cells, CD8 T cells, and NK cells is associated with increased survival of cancer patients. Such antitumor cellular immune responses can be greatly enhanced by adoptive transfer of activated type 1 lymphocytes. Recently, adoptive cell therapy based on infusion of ex vivo expanded TILs has achieved substantial clinical success. Cytokine-induced killer (CIK) cells are a heterogeneous population of effector CD8 T cells with diverse TCR specificities, possessing non-MHC-restricted cytolytic activities against tumor cells. Preclinical studies of CIK cells in murine tumor models demonstrate significant antitumor effects against a number of hematopoietic and solid tumors. 
Clinical studies have confirmed benefit and safety of CIK cell-based therapy for patients with comparable malignancies. Enhancing the potency and specificity of CIK therapy via immunological and genetic engineering approaches and identifying robust biomarkers of response will significantly improve this therapy.", "title": "" }, { "docid": "42b8163ac8544dae2060f903c377b201", "text": "Cloud storage systems are currently very popular, generating a large amount of traffic. Indeed, many companies offer this kind of service, including worldwide providers such as Dropbox, Microsoft and Google. These companies, as well as new providers entering the market, could greatly benefit from knowing typical workload patterns that their services have to face in order to develop more cost-effective solutions. However, despite recent analyses of typical usage patterns and possible performance bottlenecks, no previous work investigated the underlying client processes that generate workload to the system. In this context, this paper proposes a hierarchical two-layer model for representing the Dropbox client behavior. We characterize the statistical parameters of the model using passive measurements gathered in 3 different network vantage points. Our contributions can be applied to support the design of realistic synthetic workloads, thus helping in the development and evaluation of new, well-performing personal cloud storage services.", "title": "" }, { "docid": "1de19775f0c32179f59674c7f0d8b540", "text": "As the most commonly used bots in first-person shooter (FPS) online games, aimbots are notoriously difficult to detect because they are completely passive and resemble excellent honest players in many aspects. In this paper, we conduct the first field measurement study to understand the status quo of aimbots and how they play in the wild. For data collection purpose, we devise a novel and generic technique called baittarget to accurately capture existing aimbots from the two most popular FPS games. Our measurement reveals that cheaters who use aimbots cannot play as skillful as excellent honest players in all aspects even though aimbots can help them to achieve very high shooting performance. To characterize the unskillful and blatant nature of cheaters, we identify seven features, of which six are novel, and these features cannot be easily mimicked by aimbots. Leveraging this set of features, we propose an accurate and robust server-side aimbot detector called AimDetect. The core of AimDetect is a cascaded classifier that detects the inconsistency between performance and skillfulness of aimbots. We evaluate the efficacy and generality of AimDetect using the real game traces. Our results show that AimDetect can capture almost all of the aimbots with very few false positives and minor overhead.", "title": "" }, { "docid": "aa907899bf41e35082641abdda1a3e85", "text": "This paper describes the measurement and analysis of the motion of a tennis swing. Over the past decade, people have taken a greater interest in their physical condition in an effort to avoid health problems due to aging. Exercise, especially sports, is an integral part of a healthy lifestyle. As a popular lifelong sport, tennis was selected as the subject of this study, with the focus on the correct form for playing tennis, which is difficult to learn. 
We used a 3D gyro sensor fixed at the waist to detect the angular velocity in the movement of the stroke and serve of expert and novice tennis players for comparison.", "title": "" }, { "docid": "9dda6c4714d5e0be6c3b9c21c92c0915", "text": "This paper presents design, implementation and performance of a new Variable Stiffness Actuator (VSA) based on Harmonic Drives (VSA-HD), which is an improvement over past work reported in [1], [2]. While previous prototypes have been developed to demonstrate the effectiveness of the variable stiffness actuation principle and the possibility to develop a compact and reliable actuator, the VSA-HD has been obtained by exploring the performance of the enumeration of all VSA made out a basic components set (i.e. two prime movers, two harmonic-drive gears, and the output shaft) and all the feasible interconnections between them as presented in [3]. Along this enumeration the VSA-HD conceptual layout has been selected as being good trade-off between mechanical complexity and overall performance. This paper discusses in depth the actuator mechanical layout, highlighting the main characteristics of the new design. A model for the actuator is introduced and validated by experimental results.", "title": "" }, { "docid": "01a5bc92db5ae56c3bae8ddc84a1aa9b", "text": "Accurate and automatic detection and delineation of cervical cells are two critical precursor steps to automatic Pap smear image analysis and detecting pre-cancerous changes in the uterine cervix. To overcome noise and cell occlusion, many segmentation methods resort to incorporating shape priors, mostly enforcing elliptical shapes (e.g. [1]). However, elliptical shapes do not accurately model cervical cells. In this paper, we propose a new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images. We show that our star-shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.", "title": "" }, { "docid": "4ba81ce5756f2311dde3fa438f81e527", "text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.", "title": "" }, { "docid": "f89b282f58ac28975285a24194c209f2", "text": "Creating pixel art is a laborious process that requires artists to place individual pixels by hand. 
Although many image editors provide vector-to-raster conversions, the results produced do not meet the standards of pixel art: artifacts such as jaggies or broken lines frequently occur. We describe a novel Pixelation algorithm that rasterizes vector line art while adhering to established conventions used by pixel artists. We compare our results through a user study to those generated by Adobe Illustrator and Photoshop, as well as hand-drawn samples by both amateur and professional pixel artists.", "title": "" }, { "docid": "ca5eaacea8702798835ca585200b041d", "text": "ccupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor wellbeing. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational 171", "title": "" }, { "docid": "d9057298fc41a04099638733248034cb", "text": "A completely monolithic high-Q oscillator, fabricated via a combined CMOS plus surface micromachining technology, is described, for which the oscillation frequency is controlled by a polysilicon micromechanical resonator with the intent of achieving high stability. The operation and performance of micromechanical resonators are modeled, with emphasis on circuit and noise modeling of multiport resonators. A series resonant oscillator design is discussed that utilizes a unique, gain-controllable transresistance sustaining amplifier. We show that in the absence of an automatic level control loop, the closed-loop, steady-state oscillation amplitude of this oscillator depends strongly upon the dc-bias voltage applied to the capacitively driven and sensed resonator. 
Although the high-Q of the micromechanical resonator does contribute to improved oscillator stability, its limited power-handling ability outweighs the Q benefits and prevents this oscillator from achieving the high short-term stability normally expected of high-Q oscillators.", "title": "" }, { "docid": "f262e911b5254ad4d4419ed7114b8a4f", "text": "User Satisfaction is one of the most extensively used dimensions for Information Systems (IS) success evaluation with a large body of literature and standardized instruments of User Satisfaction. Despite the extensive literature on User Satisfaction, there exist much controversy over the measures of User Satisfaction and the adequacy of User Satisfaction measures to gauge the level of success in complex, contemporary IS. Recent studies in IS have suggested treating User Satisfaction as an overarching construct of success, rather than a measure of success. Further perplexity is introduced over the alleged overlaps between User Satisfaction measures and the measures of IS success (e.g. system quality, information quality) suggested in the literature. The following study attempts to clarify the aforementioned confusions by gathering data from 310 Enterprise System users and analyzing 16 User Satisfaction instruments. The statistical analysis of the 310 responses and the content analysis of the 16 instruments suggest the appropriateness of treating User Satisfaction as an overarching measure of success rather a dimension of success.", "title": "" }, { "docid": "811080d1bf24f041792d6895791242bb", "text": "We survey the use of weighted finite-state transducers (WFSTs) in speech recognition. We show that WFSTs provide a common and natural representation for HMM models, context dependency, pronunciation dictionaries, grammars and alternative recognition outputs. Furthermore, general transducer operations combine these representations flexibly and efficiently. Weighted determinization and minimization algorithms optimize their time and space requirements, and a weight pushing algorithm distributes the weights along the paths of a weighted transducer optimally for speech recognition. As an example, we describe a North American Business News (NAB) recognition system built using these techniques that combines the HMMs, full cross-word triphones, a lexicon of forty thousand words and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real time on a very simple decoder. In another example, we show that the same techniques can be used to optimize lattices for second-pass recognition. In a third example, we show how general automata operations can be used to assemble lattices from different recognizers to improve recognition performance. Introduction. Much of current large-vocabulary speech recognition is based on models such as HMMs, tree lexicons or n-gram language models that can be represented by weighted finite-state transducers. Even when richer models are used, for instance context-free grammars for spoken dialog applications, they are often restricted for efficiency reasons to regular subsets, either by design or by approximation (Pereira and Wright; Nederhof; Mohri and Nederhof). A finite-state transducer is a finite automaton whose state transitions are labeled with both input and output symbols. Therefore, a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence. A weighted transducer puts weights on transitions in
addition to the input and output symbols. Weights may encode probabilities, durations, penalties or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence. Weighted transducers are thus a natural choice to represent the probabilistic finite-state models prevalent in speech processing. We present a survey of the recent work done on the use of weighted finite-state transducers (WFSTs) in speech recognition (Mohri et al.; Pereira and Riley; Mohri; Mohri et al.; Mohri and Riley; Mohri et al.; Mohri and Riley). We show that common methods for combining and optimizing probabilistic models in speech processing can be generalized and efficiently implemented by translation to mathematically well-defined operations on weighted transducers. Furthermore, new optimization opportunities arise from viewing all symbolic levels of ASR modeling as weighted transducers. Thus, weighted finite-state transducers define a common framework with shared algorithms for the representation and use of the models in speech recognition that has important algorithmic and software engineering benefits. We start by introducing the main definitions and notation for weighted finite-state acceptors and transducers used in this work. We then present introductory speech-related examples and describe the most important weighted transducer operations relevant to speech applications. Finally, we give examples of the application of transducer representations and operations on transducers to large-vocabulary speech recognition with results that meet certain optimality criteria. Weighted Finite-State Transducer Definitions and Algorithms. The definitions that follow are based on the general algebraic notion of semiring (Kuich and Salomaa). The semiring abstraction permits the definition of automata representations and algorithms over a broad class of weight sets and algebraic operations. A semiring (K, ⊕, ⊗, 0, 1) consists of a set K equipped with an associative and commutative operation ⊕ and an associative operation ⊗, with identities 0 and 1 respectively, such that ⊗ distributes over ⊕ and 0 ⊗ a = a ⊗ 0 = 0. In other words, a semiring is similar to the more familiar ring algebraic structure, such as the ring of polynomials over the reals, except that the additive operation may not have an inverse. For example, (N, +, ×, 0, 1) is a semiring. The weights used in speech recognition often represent probabilities; the corresponding semiring is then the probability semiring (R+, +, ×, 0, 1). For numerical stability, implementations may replace probabilities with log probabilities. The appropriate semiring is then the image by -log of the semiring (R+, +, ×, 0, 1) and is called the log semiring. When using log probabilities with a Viterbi (best path) approximation, the appropriate semiring is the tropical semiring (R+ ∪ {∞}, min, +, ∞, 0). In the following definitions we assume an arbitrary semiring K. We will give examples with different semirings to illustrate the variety of useful computations that can be carried out in this framework by a judicious choice of semiring.", "title": "" }, { "docid": "22f53a70d91cd12552a864f95bf02dd2", "text": "A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. 
In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.", "title": "" }, { "docid": "8c9f82b50cd541ed0efe1089b098e426", "text": "This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and of-ten shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition.", "title": "" }, { "docid": "16a384727d6a323437a0b6ed3cdcc230", "text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. 
This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.", "title": "" }, { "docid": "8a22f454a657768a3d5fd6e6ec743f5f", "text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.", "title": "" }, { "docid": "74136e5c4090cc990f62c399781c9bb3", "text": "This paper compares statistical techniques for text classification using Naïve Bayes and Support Vector Machines, in context of Urdu language. A large corpus is used for training and testing purpose of the classifiers. However, those classifiers cannot directly interpret the raw dataset, so language specific preprocessing techniques are applied on it to generate a standardized and reduced-feature lexicon. Urdu language is morphological rich language which makes those tasks complex. Statistical characteristics of corpus and lexicon are measured which show satisfactory results of text preprocessing module. The empirical results show that Support Vector Machines outperform Naïve Bayes classifier in terms of classification accuracy.", "title": "" }, { "docid": "b46814fe7425ca376069e2871bdf431a", "text": "Colours seen in dreams by six observers were recorded from memory and plotted on a CIE u, v, chromaticity diagram. Only about half the dreams recorded contained colour, and in those in which colour appeared the more saturated purples, blues and blue greens were absent. It is suggested that during achromatic dreams the areas of the visual cortex which seem to respond only to colour may be inoperative. 
The paucity of blue in dreams could be anatomically related to the small population of blue units in the colour areas of the cortex.", "title": "" }, { "docid": "704d068f791a8911068671cb3dca7d55", "text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.", "title": "" } ]
scidocsrr
be42e28abe754153276a48b49fc2136a
Indexing by Latent Dirichlet Allocation and an Ensemble Model
[ { "docid": "14838947ee3b95c24daba5a293067730", "text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "title": "" }, { "docid": "e4f5b211598570faf43fc08e961faf86", "text": "Activities such as Web Services and the Semantic Web are working to create a web of distributed machine understandable data. In this paper we present an application called 'Semantic Search' which is built on these supporting technologies and is designed to improve traditional web searching. We provide an overview of TAP, the application framework upon which the Semantic Search is built. We describe two implemented Semantic Search systems which, based on the denotation of the search query, augment traditional search results with relevant data aggregated from distributed sources. We also discuss some general issues related to searching and the Semantic Web and outline how an understanding of the semantics of the search terms can be used to provide better results.", "title": "" }, { "docid": "80b173cf8dbd0bc31ba8789298bab0fa", "text": "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.", "title": "" } ]
[ { "docid": "285f46045afe4ded9a2fcabfcfe9ef02", "text": "Spin-transfer torque magnetic memory (STT-MRAM) has gained significant research interest due to its nonvolatility and zero standby leakage, near unlimited endurance, excellent integration density, acceptable read and write performance, and compatibility with CMOS process technology. However, several obstacles need to be overcome for STT-MRAM to become the universal memory technology. This paper first reviews the fundamentals of STT-MRAM and discusses key experimental breakthroughs. The state of the art in STT-MRAM is then discussed, beginning with the device design concepts and challenges. The corresponding bit-cell design solutions are also presented, followed by the STT-MRAM cache architectures suitable for on-chip applications.", "title": "" }, { "docid": "f9bb24bb458866c9b73b761ac5463d6d", "text": "Sharp-wave-ripple (SPW-R) complexes are believed to mediate memory reactivation, transfer, and consolidation. However, their underlying neuronal dynamics at multiple scales remains poorly understood. Using concurrent hippocampal local field potential (LFP) recordings and functional MRI (fMRI), we study local changes in neuronal activity during SPW-R episodes and their brain-wide correlates. Analysis of the temporal alignment between SPW and ripple components reveals well-differentiated SPW-R subtypes in the CA1 LFP. SPW-R-triggered fMRI maps show that ripples aligned to the positive peak of their SPWs have enhanced neocortical metabolic up-regulation. In contrast, ripples occurring at the trough of their SPWs relate to weaker neocortical up-regulation and absent subcortical down-regulation, indicating differentiated involvement of neuromodulatory pathways in the ripple phenomenon mediated by long-range interactions. To our knowledge, this study provides the first evidence for the existence of SPW-R subtypes with differentiated CA1 activity and metabolic correlates in related brain areas, possibly serving different memory functions.", "title": "" }, { "docid": "34fb2f437c5135297ec2ad52556440e9", "text": "This study investigates self-disclosure in the novel context of online dating relationships. Using a national random sample of Match.com members (N = 349), the authors tested a model of relational goals, self-disclosure, and perceived success in online dating. The authors’findings provide support for social penetration theory and the social information processing and hyperpersonal perspectives as well as highlight the positive effect of anticipated future face-to-face interaction on online self-disclosure. The authors find that perceived online dating success is predicted by four dimensions of self-disclosure (honesty, amount, intent, and valence), although honesty has a negative effect. Furthermore, online dating experience is a strong predictor of perceived success in online dating. Additionally, the authors identify predictors of strategic success versus self-presentation success. This research extends existing theory on computer-mediated communication, selfdisclosure, and relational success to the increasingly important arena of mixed-mode relationships, in which participants move from mediated to face-to-face communication.", "title": "" }, { "docid": "dcf4de4629be22628f5b226a1dcee856", "text": "Paper prototyping offers unique affordances for interface design. 
However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as \"walked through\" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.", "title": "" }, { "docid": "05874da7b27475377dcd8f7afdd1bc5a", "text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler. Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator. Whenever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.", "title": "" }, { "docid": "8552f08b2c98bcf201f623e95073f9e3", "text": "The power sensitivity of passive Radio Frequency Identification (RFID) tags heavily affects the read reliability and range. Inventory tracking systems rely heavily on strong read reliability while animal tracking in large fields rely heavily on long read range. Power Optimized Waveforms (POWs) provide a solution to improving both read reliability and read range by increasing RFID tag RF to DC power conversion efficiency. This paper presents a survey of the increases and decreases to read range of common RFID tags from Alien and Impinj with Higgs, Higgs 2, Higgs 3, Monza 3, and Monza 4 RFICs. In addition, POWs are explained in detail with examples and methods of integration into a reader.", "title": "" }, { "docid": "3bdd30d2c6e63f2e5540757f1db878b6", "text": "The spreading of unsubstantiated rumors on online social networks (OSN) either unintentionally or intentionally (e.g., for political reasons or even trolling) can have serious consequences such as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of contents by analyzing a sample of 1.2M Facebook Italian users consuming different (and opposite) types of information (science and conspiracy news). We show that users’ engagement across different contents correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of contents are more likely to spread. 
Then, we test diffusion patterns on an external sample of 4,709 intentional satirical false claims showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found out that in an environment where misinformation is pervasive, users’ aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information.", "title": "" }, { "docid": "eccebb7b83f0f21eb1e046ed39740149", "text": "Evolutionary computation is the study of non-deterministic search algorithms that are based on aspects of Darwin’s theory of evolution by natural selection [10]. The principal originators of evolutionary algorithms were John Holland, Ingo Rechenberg, Hans-Paul Schwefel and Lawrence Fogel. Holland proposed genetic algorithms and wrote about them in his 1975 book [19]. He emphasized the role of genetic recombination (often called ‘crossover’). Ingo Rechenberg and Hans-Paul Schwefel worked on the optimization of physical shapes in fluids and, after trying a variety of classical optimization techniques, discovered that altering physical variables in a random manner (ensuring small modifications were more frequent than larger ones) proved to be a very effective technique. This gave rise to a form of evolutionary algorithm that they termed an evolutionary strategy [38, 40]. Lawrence Fogel investigated evolving finite state machines to predict symbol strings of symbols generated by Markov processes and non-stationary time series [15]. However, as is so often the case in science, various scientists considered or suggested search algorithms inspired by Darwinian evolution much earlier. David Fogel, Lawrence Fogel’s son, offers a detailed account of such early pioneers in his book on the history of evolutionary computation [14]. However, it is interesting to note that the idea of artificial evolution was suggested by one of the founders of computer science, Alan Turing, in 1948. Turing wrote an essay while working on the construction of an electronic computer called the Automatic Computing Engine (ACE) at the National Physical Laboratory in the UK. His employer happened to be Sir Charles Darwin, the grandson of Charles Darwin, the author of ‘On the Origin of Species’. Sir Charles dismissed the article as a “schoolboy essay”! It has since been recognized that in the article Turing not only proposed artificial neural networks but the field of artificial intelligence itself [44].", "title": "" }, { "docid": "08f3e3a76808c546ed761a24fb10561c", "text": "We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e. linear dynamical systems modelling the target sequences. We start by giving a closed form solution for the definition of the optimal weights of a linear autoencoder given a training set of sequences. This solution, however, is computationally very demanding, so we suggest a procedure to get an approximate solution for a given number of hidden units. The weights obtained for the linear autoencoder are then used as initial weights for the input-to-hidden connections of a recurrent neural network, which is then trained on the desired task. 
Using four well known datasets of sequences of polyphonic music, we show that the proposed pre-training approach is highly effective, since it allows to largely improve the state of the art results on all the considered datasets.", "title": "" }, { "docid": "b593e73a0be2c9a49430947413ce2d6b", "text": "In this paper, we propose a reusable and high-efficiency two-stage deep learning based method for surface defect inspection in industrial environment. Aiming to achieve trade-offs between efficiency and accuracy simultaneously, our method makes a novel combination of a segmentation stage (stage1) and a detection stage (stage2), which are consisted of two fully convolutional networks (FCN) separately. In the segmentation stage we use a lightweight FCN to make a spatially dense pixel-wise prediction to inference the area of defect coarsely and quickly. Those predicted defect areas act as the initialization of stage2, guiding the process of detection to refine the segmentation results. We also use an unusual training strategy: training with the patches cropped from the images. Such strategy has greatly utility in industrial inspection where training data may be scarce. We will validate our findings by analyzing the performance obtained on the dataset of DAGM 2007.", "title": "" }, { "docid": "d75ebc4041927b525d8f4937c760518e", "text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.", "title": "" }, { "docid": "b806b8ef552ead8f6668eeb3c9b51095", "text": "In this paper, we analyze a workload trace from the Google cloud cluster and characterize the observed failures. The goal of our work is to improve the understanding of failures in compute clouds. We present the statistical properties of job and task failures, and attempt to correlate them with key scheduling constraints, node operations, and attributes of users in the cloud. We also explore the potential for early failure prediction, and anomaly detection for the jobs. Based on our results, we speculate that there are many opportunities to enhance the reliability of the applications running in the cloud, such as pro-active maintenance of nodes or limiting job resubmissions. We further find that resource usage patterns of the jobs can be leveraged by failure prediction techniques. 
Finally, we find that the termination statuses of jobs and tasks can be clustered into six dominant categories based on the user profiles.", "title": "" }, { "docid": "f79eca0cafc35ed92fd8ffd2e7a4ab60", "text": "We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.", "title": "" }, { "docid": "c65002f1c53fb8f08e3565fba5fc4b32", "text": "Clustering algorithms are very popular and general-purpose, but clustering results depend on the initial set, and the algorithms easily become trapped in local optima; these are the two main factors that affect clustering results and the issues that must be considered when using a clustering algorithm. Many algorithms have made improvements in these two respects: hierarchical clustering exploits the internal relationships within the data and can provide a good initial set for clustering; online clustering re-clusters the samples according to some rule during the clustering process; and methods such as PCA apply a mapping function to form linear combinations of the data in order to achieve dimensionality-reducing partitions. In February 2007, Brendan J. Frey and Delbert Dueck of the University of Toronto, Canada, proposed a method that iterates over the similarities between pairs of points until clustering is achieved. Compared with other clustering algorithms, this method is faster and more robust, and it effectively avoids the difficulty of determining the initial set [8]. In the algorithm of this paper, an adaptive method is proposed to address these two problems of clustering algorithms: it requires no manual intervention, selects the initial set automatically from the image, and uses both color distance and spatial distance as clustering criteria, thereby avoiding the possibility of the algorithm converging to an inappropriate local optimum. This paper: color-based adaptive clustering segmentation.", "title": "" }, { "docid": "36537442a340363be73bbdfb319b91eb", "text": "Future sensor networks will be composed of a large number of densely deployed sensors/actuators. A key feature of such networks is that their nodes are untethered and unattended. Consequently, energy efficiency is an important design consideration for these networks. Motivated by the fact that sensor network queries may often be geographical, we design and evaluate an energy efficient routing algorithm that propagates a query to the appropriate geographical region, without flooding. The proposed Geographic and Energy Aware Routing (GEAR) algorithm uses energy aware neighbor selection to route a packet towards the target region and Recursive Geographic Forwarding or Restricted Flooding algorithm to disseminate the packet inside the destination region.", "title": "" }, { "docid": "6721d6fb3b2f97062303eb63e6e9de31", "text": "Business process modeling is a big part in the industry, mainly to document, analyze, and optimize workflows. Currently, the EPC process modeling notation is used very wide, because of the excellent integration in the ARIS Toolset and the long existence of this process language. But as a change of time, BPMN gets popular and the interest in the industry and companies gets growing up. It is standardized, has more expressiveness than EPC and the tool support increase very rapidly. With having tons of existing EPC process models; a big need from the industry is to have an automated transformation from EPC to BPMN. This paper specified a direct approach of a transformation from EPC process model elements to BPMN. Thereby it is tried to map every construct in EPC fully automated to BPMN. But as it is described, not for every process element works this out, so in addition, some extensions and semantics rules are defined.", "title": "" }, { "docid": "635d981a3f54735ccea336feb0ead45b", "text": "Keyphrase is an efficient representation of the main idea of documents. While background knowledge can provide valuable information about documents, they are rarely incorporated in keyphrase extraction methods. 
In this paper, we propose WikiRank, an unsupervised method for keyphrase extraction based on the background knowledge from Wikipedia. Firstly, we construct a semantic graph for the document. Then we transform the keyphrase extraction problem into an optimization problem on the graph. Finally, we get the optimal keyphrase set to be the output. Our method obtains improvements over other state-of-art models by more than 2% in F1-score.", "title": "" }, { "docid": "5793cf03753f498a649c417e410c325e", "text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "title": "" }, { "docid": "ecdd1d68113d792fc12c8501c65f982c", "text": "Infertility problem is an important issue in recent decades. Semen analysis is one of the principle tasks to evaluate male partner fertility potential. It has been seen in many researches that life habits and health status affect semen quality. Data mining as a decision support system can help to recognize this effect. The artificial neural network (ANN) is a powerful data mining tool that can be used for this goal. The performance of ANN depends heavily on network structure. It is a very difficult task to determine the appropriate structure and is a discussable matter. This paper utilizes a genetic algorithm to optimize the structure of artificial neural network to classify the semen samples. These samples usually suffer from unbalancing problem. Thus, this paper attempts to resolve it by using the bootstrap method. The performance of the proposed algorithm is significantly better than the previous works. We achieve accuracy equal to 93.86% in our experiments on a real fertility diagnosis dataset that is a good improvement compared with other classification methods.", "title": "" }, { "docid": "a29d666fe1135bb60a75f1cecf85e31c", "text": "Approximate computing aims for efficient execution of workflows where an approximate output is sufficient instead of the exact output. The idea behind approximate computing is to compute over a representative sample instead of the entire input dataset. Thus, approximate computing — based on the chosen sample size — can make a systematic trade-off between the output accuracy and computation efficiency. Unfortunately, the state-of-the-art systems for approximate computing primarily target batch analytics, where the input data remains unchanged during the course of sampling. Thus, they are not well-suited for stream analytics. This motivated the design of StreamApprox— a stream analytics system for approximate computing. To realize this idea, we designed an online stratified reservoir sampling algorithm to produce approximate outputwith rigorous error bounds. 
Importantly, our proposed algorithm is generic and can be applied to two prominent types of stream processing systems: (1) batched stream processing such asApache Spark Streaming, and (2) pipelined stream processing such as Apache Flink. To showcase the effectiveness of our algorithm,we implemented StreamApprox as a fully functional prototype based on Apache Spark Streaming and Apache Flink. We evaluated StreamApprox using a set of microbenchmarks and real-world case studies. Our results show that Sparkand Flink-based StreamApprox systems achieve a speedup of 1.15×—3× compared to the respective native Spark Streaming and Flink executions, with varying sampling fraction of 80% to 10%. Furthermore, we have also implemented an improved baseline in addition to the native execution baseline — a Spark-based approximate computing system leveraging the existing sampling modules in Apache Spark. Compared to the improved baseline, our results show that StreamApprox achieves a speedup 1.1×—2.4× while maintaining the same accuracy level. This technical report is an extended version of our conference publication [39].", "title": "" } ]
scidocsrr
39c3b23d47030037ec82f18acf5d4214
Fog Computing Over IoT: A Secure Deployment and Formal Verification
[ { "docid": "79caff0b1495900b5c8f913562d3e84d", "text": "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and/or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.", "title": "" }, { "docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" } ]
[ { "docid": "48bc09501132babdd003f8016a01fff7", "text": "We study the security challenges that arise in opportunistic people-centric sensing, a new sensing paradigm leveraging humans as part of the sensing infrastructure. Most prior sensor-network research has focused on collecting and processing environmental data using a static topology and an application-aware infrastructure, whereas opportunistic sensing involves collecting, storing, processing and fusing large volumes of data related to everyday human activities. This highly dynamic and mobile setting, where humans are the central focus, presents new challenges for information security, because data originates from sensors carried by people— not tiny sensors thrown in the forest or attached to animals. In this paper we aim to instigate discussion of this critical issue, because opportunistic people-centric sensing will never succeed without adequate provisions for security and privacy. To that end, we outline several important challenges and suggest general solutions that hold promise in this new sensing paradigm.", "title": "" }, { "docid": "13091eb3775715269b7bee838f0a6b00", "text": "Smartphones can now connect to a variety of external sensors over wired and wireless channels. However, ensuring proper device interaction can be burdensome, especially when a single application needs to integrate with a number of sensors using different communication channels and data formats. This paper presents a framework to simplify the interface between a variety of external sensors and consumer Android devices. The framework simplifies both application and driver development with abstractions that separate responsibilities between the user application, sensor framework, and device driver. These abstractions facilitate a componentized framework that allows developers to focus on writing minimal pieces of sensor-specific code enabling an ecosystem of reusable sensor drivers. The paper explores three alternative architectures for application-level drivers to understand trade-offs in performance, device portability, simplicity, and deployment ease. We explore these tradeoffs in the context of four sensing applications designed to support our work in the developing world. They highlight a range of sensor usage models for our application-level driver framework that vary data types, configuration methods, communication channels, and sampling rates to demonstrate the framework's effectiveness.", "title": "" }, { "docid": "e945b0e23ad090cd76b920e073d26116", "text": "Despite the success of proxy caching in the Web, proxy servers have not been used effectively for caching of Internet multimedia streams such as audio and video. Explosive growth in demand for web-based streaming applications justifies the need for caching popular streams at a proxy server close to the interested clients. Because of the need for congestion control in the Internet, multimedia streams should be quality adaptive. This implies that on a cache-hit, a proxy must replay a variable-quality cached stream whose quality is determined by the bandwidth of the first session. This paper addresses the implications of congestion control and quality adaptation on proxy caching mechanisms. We present a fine-grain replacement algorithm for layered-encoded multimedia streams at Internet proxy servers, and describe a pre-fetching scheme to smooth out the variations in quality of a cached stream during subsequent playbacks. 
This enables the proxy to perform quality adaptation more effectively and maximizes the delivered quality. We also extend the semantics of popularity and introduce the idea of weighted hit to capture both the level of interest and the usefulness of a layer for a cached stream. Finally, we present our replacement algorithm and show that its interaction with prefetching results in the state of the cache converging to the optimal state such that the quality of a cached stream is proportional to its popularity, and the variations in quality of a cached stream are inversely proportional to its popularity. This implies that after serving several requests for a stream, the proxy can effectively hide low bandwidth paths to the original server from interested clients.", "title": "" }, { "docid": "201843d32d030d4c9bb388e4fbcd4f3c", "text": "This paper reports on thermal-mechanical failures of through-silicon-vias (TSVs), in particular, for the first time, the protrusions at the TSV backside, which is exposed after wafer bonding, thinning and TSV revealing. Temperature dependence of TSV protrusion is investigated based on wide-range thermal shock and thermal cycling tests. While TSV protrusion on the TSV frontside is not visible after any of the tests, protrusions on the backside are found after both thermal shock tests and thermal cycling tests at temperatures above 250°C. The average TSV protrusion height increases from ~0.1 μm at 250°C to ~0.5 μm at 400°C and can be fitted to an exponential function with an activation energy of ~0.6eV, suggesting a Cu grain boundary diffusion mechanism.", "title": "" }, { "docid": "b58a04bbb5d69e6d2e48392d389383a7", "text": "Automatic generation of natural language from images has attracted extensive attention. In this paper, we take one step further to investigate generation of poetic language (with multiple lines) to an image for automatic poetry creation. This task involves multiple challenges, including discovering poetic clues from the image (e.g., hope from green), and generating poems to satisfy both relevance to the image and poeticness in language level. To solve the above challenges, we formulate the task of poem generation into two correlated sub-tasks by multi-adversarial training via policy gradient, through which the cross-modal relevance and poetic language style can be ensured. To extract poetic clues from images, we propose to learn a deep coupled visual-poetic embedding, in which the poetic representation from objects, sentiments (we consider both adjectives and verbs that can express emotions and feelings as sentiment words in this research) and scenes in an image can be jointly learned. Two discriminative networks are further introduced to guide the poem generation, including a multi-modal discriminator and a poem-style discriminator. To facilitate the research, we have released two poem datasets by human annotators with two distinct properties: 1) the first human annotated image-to-poem pair dataset (with 8,292 pairs in total), and 2) to-date the largest public English poem corpus dataset (with 92,265 different poems in total). Extensive experiments are conducted with 8K images, among which 1.5K image are randomly picked for evaluation. Both objective and subjective evaluations show the superior performances against the state-of-the-art methods for poem generation from images. 
Turing test carried out with over $500$ human subjects, among which 30 evaluators are poetry experts, demonstrates the effectiveness of our approach.", "title": "" }, { "docid": "6a8e0345055e3c1f0dce1059e85741cf", "text": "This paper presents the effectiveness of a time-delay compensation method based on the concept of network disturbance and communication disturbance observer for bilateral teleoperation systems under time-varying delay. The most efficient feature of the compensation method is that it works without time-delay models (model-based time-delay compensation approaches like Smith predictor usually need time-delay models). Therefore, the method is expected to be widely applied to network-based control systems, in which time delay is usually unknown and time varying. In this paper, the validity of the time-delay compensation method in the cases of both constant delay and time-varying delay is verified by experimental results compared with Smith predictor.", "title": "" }, { "docid": "ed46f9225b60c5f128257310cd1b27ed", "text": "We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.", "title": "" }, { "docid": "4bf4582ed9da1bca4124f15f3e23fba4", "text": "MOTIVATION\nGenetic networks are often described statistically using graphical models (e.g. Bayesian networks). However, inferring the network structure offers a serious challenge in microarray analysis where the sample size is small compared to the number of considered genes. This renders many standard algorithms for graphical models inapplicable, and inferring genetic networks an 'ill-posed' inverse problem.\n\n\nMETHODS\nWe introduce a novel framework for small-sample inference of graphical models from gene expression data. Specifically, we focus on the so-called graphical Gaussian models (GGMs) that are now frequently used to describe gene association networks and to detect conditionally dependent genes. Our new approach is based on (1) improved (regularized) small-sample point estimates of partial correlation, (2) an exact test of edge inclusion with adaptive estimation of the degree of freedom and (3) a heuristic network search based on false discovery rate multiple testing. Steps (2) and (3) correspond to an empirical Bayes estimate of the network topology.\n\n\nRESULTS\nUsing computer simulations, we investigate the sensitivity (power) and specificity (true negative rate) of the proposed framework to estimate GGMs from microarray data. This shows that it is possible to recover the true network topology with high accuracy even for small-sample datasets. 
Subsequently, we analyze gene expression data from a breast cancer tumor study and illustrate our approach by inferring a corresponding large-scale gene association network for 3883 genes.", "title": "" }, { "docid": "ccfba22a7697a9deaedbb7d1ceebbc33", "text": "The Machine Learning field evolved from the broad field of Artificial Intelligence, which aims to mimic intelligent abilities of humans by machines. In the field of Machine Learning one considers the important question of how to make machines able to “learn”. Learning in this context is understood as inductive inference, where one observes examples that represent incomplete information about some “statistical phenomenon”. In unsupervised learning one typically tries to uncover hidden regularities (e.g. clusters) or to detect anomalies in the data (for instance some unusual machine function or a network intrusion). In supervised learning, there is a label associated with each example. It is supposed to be the answer to a question about the example. If the label is discrete, then the task is called a classification problem – otherwise, for real-valued labels we speak of a regression problem. Based on these examples (including the labels), one is particularly interested to predict the answer for other cases before they are explicitly observed. Hence, learning is not only a question of remembering but also of generalization to unseen cases.", "title": "" }, { "docid": "ca7870fd17c25a8ef2931cb39c062018", "text": "This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity.", "title": "" }, { "docid": "8c9e311397d99dddd9a649a2f412604f", "text": "Currently, information security is a significant challenge in the information era because businesses store critical information in databases. Therefore, databases need to be a secure component of an enterprise. Organizations use Intrusion Detection Systems (IDS) as a security infrastructure component, of which a popular implementation is Snort. 
In this paper, we provide an overview of Snort and evaluate its ability to detect SQL Injection attacks.", "title": "" }, { "docid": "80114894d0de71af2bff0e2d5f168b2c", "text": "Software companies can leverage successful firms' business and revenue models to create a competitive advantage.", "title": "" }, { "docid": "41135401a2f04797ea2b4989065613bd", "text": "With the rapid expansion of new available information presented to us online on a daily basis, text classification becomes imperative in order to classify and maintain it. Word2vec offers a unique perspective to the text mining community. By converting words and phrases into a vector representation, word2vec takes an entirely new approach on text classification. Based on the assumption that word2vec brings extra semantic features that helps in text classification, our work demonstrates the effectiveness of word2vec by showing that tf-idf and word2vec combined can outperform tf-idf because word2vec provides complementary features (e.g. semantics that tf-idf can't capture) to tf-idf. Our results show that the combination of word2vec weighted by tf-idf and tf-idf does not outperform tf-idf consistently. It is consistent enough to say the combination of the two can outperform either individually.", "title": "" }, { "docid": "54bae3ac2087dbc7dcba553ce9f2ef2e", "text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature. The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.", "title": "" }, { "docid": "80a34e1544f9a20d6e1698278e0479b5", "text": "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.", "title": "" }, { "docid": "1c22ee7dc93c35b45a817866c822f0e7", "text": "Despite the recent advances in test generation, fully automatic software testing remains a dream: Ultimately, any generated test input depends on a test oracle that determines correctness, and, except for generic properties such as “the program shall not crash”, such oracles require human input in one form or another. CrowdSourcing is a recently popular technique to automate computations that cannot be performed by machines, but only by humans. 
A problem is split into small chunks, that are then solved by a crowd of users on the Internet. In this paper we investigate whether it is possible to exploit CrowdSourcing to solve the oracle problem: We produce tasks asking users to evaluate CrowdOracles - assertions that reflect the current behavior of the program. If the crowd determines that an assertion does not match the behavior described in the code documentation, then a bug has been found. Our experiments demonstrate that CrowdOracles are a viable solution to automate the oracle problem, yet taming the crowd to get useful results is a difficult task.", "title": "" }, { "docid": "ae961e9267b1571ec606347f56b0d4ca", "text": "A benchmark turbulent Backward Facing Step (BFS) airflow was studied in detail through a program of tightly coupled experimental and CFD analysis. The theoretical and experimental approaches were developed simultaneously in a “building block” approach and the results used to verify each “block”. Information from both CFD and experiment was used to develop confidence in the accuracy of each technique and to increase our understanding of the BFS flow.", "title": "" }, { "docid": "5433a8e449bf4bf9d939e645e171f7e5", "text": "Software Testing (ST) processes attempt to verify and validate the capability of a software system to meet its required attributes and functionality. As software systems become more complex, the need for automated software testing methods emerges. Machine Learning (ML) techniques have shown to be quite useful for this automation process. Various works have been presented in the junction of ML and ST areas. The lack of general guidelines for applying appropriate learning methods for software testing purposes is our major motivation in this current paper. In this paper, we introduce a classification framework which can help to systematically review research work in the ML and ST domains. The proposed framework dimensions are defined using major characteristics of existing software testing and machine learning methods. Our framework can be used to effectively construct a concrete set of guidelines for choosing the most appropriate learning method and applying it to a distinct stage of the software testing life-cycle for automation purposes.", "title": "" }, { "docid": "087ca9ca531f14e8546c9f03d9e76ed3", "text": "Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures. The main reason is that most of the current generative models fail to explore the structures in the images including spatial layout and semantic relations between objects. To address this issue, we propose a novel deep structured generative model which boosts generative adversarial networks (GANs) with the aid of structure information. In particular, the layout or structure of the scene is encoded by a stochastic and-or graph (sAOG), in which the terminal nodes represent single objects and edges represent relations between objects. With the sAOG appropriately harnessed, our model can successfully capture the intrinsic structure in the scenes and generate images of complicated scenes accordingly. Furthermore, a detection network is introduced to infer scene structures from a image. 
Experimental results demonstrate the effectiveness of our proposed method on both modeling the intrinsic structures, and generating realistic images.", "title": "" }, { "docid": "7c4104651e484e4cbff5735d62f114ef", "text": "A pair of salient tradeoffs have driven the multiple-input multiple-output (MIMO) systems developments. More explicitly, the early era of MIMO developments was predominantly motivated by the multiplexing-diversity tradeoff between the Bell Laboratories layered space-time and space-time block coding. Later, the linear dispersion code concept was introduced to strike a flexible tradeoff. The more recent MIMO system designs were motivated by the performance-complexity tradeoff, where the spatial modulation and space-time shift keying concepts eliminate the problem of inter-antenna interference and perform well with the aid of low-complexity linear receivers without imposing a substantial performance loss on generic maximum-likelihood/max a posteriori -aided MIMO detection. Against the background of the MIMO design tradeoffs in both uncoded and coded MIMO systems, in this treatise, we offer a comprehensive survey of MIMO detectors ranging from hard decision to soft decision. The soft-decision MIMO detectors play a pivotal role in approaching to the full-performance potential promised by the MIMO capacity theorem. In the near-capacity system design, the soft-decision MIMO detection dominates the total complexity, because all the MIMO signal combinations have to be examined, when both the channel’s output signal and the a priori log-likelihood ratios gleaned from the channel decoder are taken into account. Against this background, we provide reduced-complexity design guidelines, which are conceived for a wide-range of soft-decision MIMO detectors.", "title": "" } ]
scidocsrr
883813dafc99315c8651a4c6633d1488
IoT system for monitoring vital signs of elderly population
[ { "docid": "99486ea19c5200afd11e8a1048ae1485", "text": "New wireless technology for tele-home-care purposes gives new possibilities for monitoring of vital parameters with wearable biomedical sensors, and will give the patient the freedom to be mobile and still be under continuously monitoring and thereby to better quality of patient care. This paper describes a new concept for wireless and wearable electrocardiogram (ECG) sensor transmitting signals to a diagnostic station at the hospital, and this concept is intended for detecting rarely occurrences of cardiac arrhythmias and to follow up critical patients from their home while they are carrying out daily activities.", "title": "" } ]
[ { "docid": "8464635cbbef4361d56cc017da8d0317", "text": "In large-scale distributed learning, security issues have become increasingly important. Particularly in a decentralized environment, some computing units may behave abnormally, or even exhibit Byzantine failures—arbitrary and potentially adversarial behavior. In this paper, we develop distributed learning algorithms that are provably robust against such failures, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, nonstrongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust, and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.", "title": "" }, { "docid": "06597c7f7d76cb3749d13b597b903570", "text": "2.1 Summary ............................................... 5 2.2 Definition .............................................. 6 2.3 History ................................................... 6 2.4 Overview of Currently Used Classification Systems and Terminology 7 2.5 Currently Used Terms in Classification of Osteomyelitis of the Jaws .................. 11 2.5.1 Acute/Subacute Osteomyelitis .............. 11 2.5.2 Chronic Osteomyelitis ........................... 11 2.5.3 Chronic Suppurative Osteomyelitis: Secondary Chronic Osteomyelitis .......... 11 2.5.4 Chronic Non-suppurative Osteomyelitis 11 2.5.5 Diffuse Sclerosing Osteomyelitis, Primary Chronic Osteomyelitis, Florid Osseous Dysplasia, Juvenile Chronic Osteomyelitis ............. 11 2.5.6 SAPHO Syndrome, Chronic Recurrent Multifocal Osteomyelitis (CRMO) ........... 13 2.5.7 Periostitis Ossificans, Garrès Osteomyelitis ............................. 13 2.5.8 Other Commonly Used Terms ................ 13 2.6 Osteomyelitis of the Jaws: The Zurich Classification System ........... 16 2.6.1 General Aspects of the Zurich Classification System ............................. 16 2.6.2 Acute Osteomyelitis and Secondary Chronic Osteomyelitis ........................... 17 2.6.3 Clinical Presentation ............................. 26 2.6.4 Primary Chronic Osteomyelitis .............. 34 2.7 Differential Diagnosis ............................ 48 2.7.1 General Considerations ......................... 48 2.7.2 Differential Diagnosis of Acute and Secondary Chronic Osteomyelitis ... 50 2.7.3 Differential Diagnosis of Primary Chronic Osteomyelitis ........................... 50 2.1 Summary", "title": "" }, { "docid": "627868e179aec6c5b807dc22da3258ed", "text": "As people integrate use of the cell phone into their lives, do they view it as just an update of the fixed telephone or assign it special values? This study explores that question in the framework of gratifications sought and their relationship both to differential cell phone use and to social connectedness. 
Based on a survey of Taiwanese college students, we found that the cell phone supplements the fixed telephone as a means of strengthening users’ family bonds, expanding their psychological neighborhoods, and facilitating symbolic proximity to the people they call. Thus, the cell phone has evolved from a luxury for businesspeople into an important facilitator of many users’ social relationships. For the poorly connected socially, the cell phone offers a unique advantage: it confers instant membership in a community. Finally, gender was found to mediate how users exploit the cell phone to maintain social ties.", "title": "" }, { "docid": "73877d224b5bbbde7ea8185284da3c2d", "text": "With the advancement of web technology and its growth, there is a huge volume of data present in the web for internet users and a lot of data is generated too. Internet has become a platform for online learning, exchanging ideas and sharing opinions. Social networking sites like Twitter, Facebook, Google+ are rapidly gaining popularity as they allow people to share and express their views about topics, have discussion with different communities, or post messages across the world. There has been lot of work in the field of sentiment analysis of twitter data. This survey focuses mainly on sentiment analysis of twitter data which is helpful to analyze the information in the tweets where opinions are highly unstructured, heterogeneous and are either positive or negative, or neutral in some cases. In this paper, we provide a survey and a comparative analyses of existing techniques for opinion mining like machine learning and lexicon-based approaches, together with evaluation metrics. Using various machine learning algorithms like Naive Bayes, Max Entropy, and Support Vector Machine, we provide research on twitter data streams.We have also discussed general challenges and applications of Sentiment Analysis on Twitter.", "title": "" }, { "docid": "846ed7b9a98f70ff6b86c11f16f7e7d0", "text": "In this study, three popular signal processing techniques (Empirical Mode Decomposition, Discrete Wavelet Transform, and Wavelet Packet Decomposition) were investigated for the decomposition of Electroencephalography (EEG) Signals in Brain Computer Interface (BCI) system for a classification task. Publicly available BCI competition III dataset IVa, a multichannel 2-class motor-imagery dataset, was used for this purpose. Multiscale Principal Component Analysis method was applied for the purpose of noise removal. In addition, different sets of features were formed to examine the effect of a particular group of features. The parameter selection process for signal decomposition methods was thoroughly explained as well. Our results show that the combination of Multiscale Principal Component Analysis de-noising and higher order statistics features extracted from wavelet packet decomposition sub-bands resulted in highest average classification accuracy of 92.8%. Our study is one among very few that provides a comprehensive comparison between signal decomposition methods in combination with higher order statistics in classification of BCI signals. In addition, we stressed the importance of higher frequency ranges in improving the classification task for EEG signals in Brain Computer Interface Systems. Obtained results indicate that the proposed model has the potential to obtain a reliable classification of motor imagery EEG signals, and can thus be used as a practical system for controlling a wheelchair. 
It can also further enhance the current rehabilitation therapies where appropriate feedback is delivered once the individual executes the correct movement. In that way, motor rehabilitation outcomes may improve over time. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8bf1b97320a6b7319e4b36dfc11b6c7b", "text": "In recent years, virtual reality exposure therapy (VRET) has become an interesting alternative for the treatment of anxiety disorders. Research has focused on the efficacy of VRET in treating anxiety disorders: phobias, panic disorder, and posttraumatic stress disorder. In this systematic review, strict methodological criteria are used to give an overview of the controlled trials regarding the efficacy of VRET in patients with anxiety disorders. Furthermore, research into process variables such as the therapeutic alliance and cognitions and enhancement of therapy effects through cognitive enhancers is discussed. The implications for implementation into clinical practice are considered.", "title": "" }, { "docid": "48931b870057884b8b1c679781e2adc9", "text": "Recommender systems have been researched extensively by the Technology Enhanced Learning (TEL) community during the last decade. By identifying suitable resources from a potentially overwhelming variety of choices, such systems offer a promising approach to facilitate both learning and teaching tasks. As learning is taking place in extremely diverse and rich environments, the incorporation of contextual information about the user in the recommendation process has attracted major interest. Such contextualization is researched as a paradigm for building intelligent systems that can better predict and anticipate the needs of users, and act more efficiently in response to their behavior. In this paper, we try to assess the degree to which current work in TEL recommender systems has achieved this, as well as outline areas in which further work is needed. First, we present a context framework that identifies relevant context dimensions for TEL applications. Then, we present an analysis of existing TEL recommender systems along these dimensions. Finally, based on our survey results, we outline topics on which further research is needed.", "title": "" }, { "docid": "818b2a97c4648f04feadbb3bd7da90cc", "text": "Reducing the number of features whilst maintaining an acceptable classification accuracy is a fundamental step in the process of constructing cancer predictive models. In this work, we introduce a novel hybrid (MI-LDA) feature selection approach for the diagnosis of ovarian cancer. This hybrid approach is embedded within a global optimization framework and offers a promising improvement on feature selection and classification accuracy processes. Global Mutual Information (MI) based feature selection optimizes the search process of finding best feature subsets in order to select the highly correlated predictors for ovarian cancer diagnosis. The maximal discriminative cancer predictors are then passed to a Linear Discriminant Analysis (LDA) classifier, and a Genetic Algorithm (GA) is applied to optimise the search process with respect to the estimated error rate of the LDA classifier (MI-LDA). Experiments were performed using an ovarian cancer dataset obtained from the FDA-NCI Clinical Proteomics Program Databank. The performance of the hybrid feature selection approach was evaluated using the Support Vector Machine (SVM) classifier and the LDA classifier. 
A comparison of the results revealed that the proposed (MI-LDA)-LDA model outperformed the (MI-LDA)-SVM model on selecting the maximal discriminative feature subset and achieved the highest predictive accuracy. The proposed system can therefore be used as an efficient tool for finding predictors and patterns in serum (blood)-derived proteomic data for the detection of ovarian cancer.", "title": "" }, { "docid": "57e162e717f17998ba9c9dc7d66f252b", "text": "The exhaustive digitalization of the economy, and to be more specific, of industrial production systems results in a new quality of information transparency. This is the basis for added values in terms of effectiveness, quality, and individuality. However, these added values also result in an increased exposure to Cyber-Security threats, due to the increased digitalization, information transparency and standardization. In this work, the procedural model for a Cyber-Security analysis based on reference architecture model Industry 4.0 (RAMI 4.0) and the VDI/VDE guideline 2182 is exemplary shown for the use case of a Cloud-based monitoring of the production. The derived procedure supports the identification of protection demands and allows a risk-based selection of suitable countermeasures.", "title": "" }, { "docid": "22658b675b501059ec5a7905f6b766ef", "text": "The purpose of this study was to compare the physiological results of 2 incremental graded exercise tests (GXTs) and correlate these results with a short-distance laboratory cycle time trial (TT). Eleven men (age 25 +/- 5 years, Vo(2)max 62 +/- 8 ml.kg(-1).min(-1)) randomly underwent 3 laboratory tests performed on a cycle ergometer. The first 2 tests consisted of a GXT consisting of either 3-minute (GXT(3-min)) or 5-minute (GXT(5-min)) workload increments. The third test involved 1 laboratory 30-minute TT. The peak power output, lactate threshold, onset of blood lactate accumulation, and maximum displacement threshold (Dmax) determined from each GXT was not significantly different and in agreement when measured from the GXT(3-min) or GXT(5-min). Furthermore, similar correlation coefficients were found among the results of each GXT and average power output in the 30-minute cycling TT. Hence, the results of either GXT can be used to predict performance or for training prescription.", "title": "" }, { "docid": "8f597b84bf40474b852083c9abb78620", "text": "The aim of this study was to re-examine individuals with gender identity disorder after as long a period of time as possible. To meet the inclusion criterion, the legal recognition of participants' gender change via a legal name change had to date back at least 10 years. The sample comprised 71 participants (35 MtF and 36 FtM). The follow-up period was 10-24 years with a mean of 13.8 years (SD = 2.78). Instruments included a combination of qualitative and quantitative methods: Clinical interviews were conducted with the participants, and they completed a follow-up questionnaire as well as several standardized questionnaires they had already filled in when they first made contact with the clinic. Positive and desired changes were determined by all of the instruments: Participants reported high degrees of well-being and a good social integration. Very few participants were unemployed, most of them had a steady relationship, and they were also satisfied with their relationships with family and friends. Their overall evaluation of the treatment process for sex reassignment and its effectiveness in reducing gender dysphoria was positive. 
Regarding the results of the standardized questionnaires, participants showed significantly fewer psychological problems and interpersonal difficulties as well as a strongly increased life satisfaction at follow-up than at the time of the initial consultation. Despite these positive results, the treatment of transsexualism is far from being perfect.", "title": "" }, { "docid": "4c28a0e7e14c567e4b66e3aaad389d6c", "text": "Given a set of objects P and a query point q, a k nearest neighbor (k-NN) query retrieves the k objects in P that lie closest to q. Even though the problem is well-studied for static datasets, the traditional methods do not extend to highly dynamic environments where multiple continuous queries require real-time results, and both objects and queries receive frequent location updates. In this paper we propose conceptual partitioning (CPM), a comprehensive technique for the efficient monitoring of continuous NN queries. CPM achieves low running time by handling location updates only from objects that fall in the vicinity of some query (and ignoring the rest). It can be used with multiple, static or moving queries, and it does not make any assumptions about the object moving patterns. We analyze the performance of CPM and show that it outperforms the current state-of-the-art algorithms for all problem settings. Finally, we extend our framework to aggregate NN (ANN) queries, which monitor the data objects that minimize the aggregate distance with respect to a set of query points (e.g., the objects with the minimum sum of distances to all query points).", "title": "" }, { "docid": "d67a93dde102bdcd2dd1a72c80aacd6b", "text": "Network intrusion detection systems have become a standard component in security infrastructures. Unfortunately, current systems are poor at detecting novel attacks without an unacceptable level of false alarms. We propose that the solution to this problem is the application of an ensemble of data mining techniques which can be applied to network connection data in an offline environment, augmenting existing real-time sensors. In this paper, we expand on our motivation, particularly with regard to running in an offline environment, and our interest in multisensor and multimethod correlation. We then review existing systems, from commercial systems, to research based intrusion detection systems. Next we survey the state of the art in the area. Standard datasets and feature extraction turned out to be more important than we had initially anticipated, so each can be found under its own heading. Next, we review the actual data mining methods that have been proposed or implemented. We conclude by summarizing the open problems in this area and proposing a new research project to answer some of these open problems.", "title": "" }, { "docid": "4bac5fa3b753c6da269a8c9d6d6ecb5a", "text": "The use of antimicrobial compounds in food animal production provides demonstrated benefits, including improved animal health, higher production and, in some cases, reduction in foodborne pathogens. However, use of antibiotics for agricultural purposes, particularly for growth enhancement, has come under much scrutiny, as it has been shown to contribute to the increased prevalence of antibiotic-resistant bacteria of human significance. The transfer of antibiotic resistance genes and selection for resistant bacteria can occur through a variety of mechanisms, which may not always be linked to specific antibiotic use. 
Prevalence data may provide some perspective on occurrence and changes in resistance over time; however, the reasons are diverse and complex. Much consideration has been given this issue on both domestic and international fronts, and various countries have enacted or are considering tighter restrictions or bans on some types of antibiotic use in food animal production. In some cases, banning the use of growth-promoting antibiotics appears to have resulted in decreases in prevalence of some drug resistant bacteria; however, subsequent increases in animal morbidity and mortality, particularly in young animals, have sometimes resulted in higher use of therapeutic antibiotics, which often come from drug families of greater relevance to human medicine. While it is clear that use of antibiotics can over time result in significant pools of resistance genes among bacteria, including human pathogens, the risk posed to humans by resistant organisms from farms and livestock has not been clearly defined. As livestock producers, animal health experts, the medical community, and government agencies consider effective strategies for control, it is critical that science-based information provide the basis for such considerations, and that the risks, benefits, and feasibility of such strategies are fully considered, so that human and animal health can be maintained while at the same time limiting the risks from antibiotic-resistant bacteria.", "title": "" }, { "docid": "17797efad4f13f961ed300316eb16b6b", "text": "Cellular senescence, which has been linked to age-related diseases, occurs during normal aging or as a result of pathological cell stress. Due to their incapacity to proliferate, senescent cells cannot contribute to normal tissue maintenance and tissue repair. Instead, senescent cells disturb the microenvironment by secreting a plethora of bioactive factors that may lead to inflammation, regenerative dysfunction and tumor progression. Recent understanding of stimuli and pathways that induce and maintain cellular senescence offers the possibility to selectively eliminate senescent cells. This novel strategy, which so far has not been tested in humans, has been coined senotherapy or senolysis. In mice, senotherapy proofed to be effective in models of accelerated aging and also during normal chronological aging. Senotherapy prolonged lifespan, rejuvenated the function of bone marrow, muscle and skin progenitor cells, improved vasomotor function and slowed down atherosclerosis progression. While initial studies used genetic approaches for the killing of senescent cells, recent approaches showed similar effects with senolytic drugs. These observations open up exciting possibilities with a great potential for clinical development. However, before the integration of senotherapy into patient care can be considered, we need further research to improve our insight into the safety and efficacy of this strategy during short- and long-term use.", "title": "" }, { "docid": "28b1cc95aa385664cacbf20661f5cf56", "text": "Many organizations now emphasize the use of technology that can help them get closer to consumers and build ongoing relationships with them. The ability to compile consumer data profiles has been made even easier with Internet technology. However, it is often assumed that consumers like to believe they can trust a company with their personal details. Lack of trust may cause consumers to have privacy concerns. 
Addressing such privacy concerns may therefore be crucial to creating stable and ultimately profitable customer relationships. Three specific privacy concerns that have been frequently identified as being of importance to consumers include unauthorized secondary use of data, invasion of privacy, and errors. Results of a survey study indicate that both errors and invasion of privacy have a significant inverse relationship with online purchase behavior. Unauthorized use of secondary data appears to have little impact. Managerial implications include the careful selection of communication channels for maximum impact, the maintenance of discrete “permission-based” contact with consumers, and accurate recording and handling of data.", "title": "" }, { "docid": "fef448324e17aeaa7bb0149369631103", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Python Photogrammetry Toolbox: A free solution for Three-Dimensional Documentation Pierre Moulon, Alessandro Bezzi", "title": "" }, { "docid": "a2251a3cd69eacf72c078f21e9ee3a40", "text": "This proposal investigates Selective Harmonic Elimination (SHE) to eliminate harmonics brought by Pulse Width Modulation (PWM) inverter. The selective harmonic elimination method for three phase voltage source inverter is generally based on ideas of opposite harmonic injection. In this proposed scheme, the lower order harmonics 3rd, 5th, 7th and 9th are eliminated. The dominant harmonics of same order generated in opposite phase by sine PWM inverter and by using this scheme the Total Harmonic Distortion (THD) is reduced. The analysis of Sinusoidal PWM technique (SPWM) and selective harmonic elimination is simulated using MATLAB/SIMULINK model.", "title": "" }, { "docid": "4a84f6400edf8cf0d3a7245efae6e5f7", "text": "The explosive use of social media also makes it a popular platform for malicious users, known as social spammers, to overwhelm normal users with unwanted content. One effective way for social spammer detection is to build a classifier based on content and social network information. However, social spammers are sophisticated and adaptable to game the system with fast evolving content and network patterns. First, social spammers continually change their spamming content patterns to avoid being detected. Second, reflexive reciprocity makes it easier for social spammers to establish social influence and pretend to be normal users by quickly accumulating a large number of “human” friends. It is challenging for existing anti-spamming systems based on batch-mode learning to quickly respond to newly emerging patterns for effective social spammer detection. In this paper, we present a general optimization framework to collectively use content and network information for social spammer detection, and provide the solution for efficient online processing. Experimental results on Twitter datasets confirm the effectiveness and efficiency of the proposed framework. 
Introduction Social media services, like Facebook and Twitter, are increasingly used in various scenarios such as marketing, journalism and public relations. While social media services have emerged as important platforms for information dissemination and communication, it has also become infamous for spammers who overwhelm other users with unwanted content. The (fake) accounts, known as social spammers (Webb et al. 2008; Lee et al. 2010), are a special type of spammers who coordinate among themselves to launch various attacks such as spreading ads to generate sales, disseminating pornography, viruses, phishing, befriending victims and then surreptitiously grabbing their personal information (Bilge et al. 2009), or simply sabotaging a system’s reputation (Lee et al. 2010). The problem of social spamming is a serious issue prevalent in social media sites. Characterizing and detecting social spammers can significantly improve the quality of user experience, and promote the healthy use and development of a social networking system. Following spammer detection in traditional platforms like Email and the Web (Chen et al. 2012), some efforts have been devoted to detect spammers in various social networking sites, including Twitter (Lee et al. 2010), Renren (Yang et al. 2011), Blogosphere (Lin et al. 2007), etc. Existing methods can generally be divided into two categories. First category is to employ content analysis for detecting spammers in social media. Profile-based features (Lee et al. 2010) such as content and posting patterns are extracted to build an effective supervised learning model, and the model is applied on unseen data to filter social spammers. Another category of methods is to detect spammers via social network analysis (Ghosh et al. 2012). A widely used assumption in the methods is that spammers cannot establish an arbitrarily large number of social trust relations with legitimate users. The users with relatively low social influence or social status in the network will be determined as spammers. Traditional spammer detection methods become less effective due to the fast evolution of social spammers. First, social spammers show dynamic content patterns in social media. Spammers’ content information changes too fast to be detected by a static anti-spamming system based on offline modeling (Zhu et al. 2012). Spammers continue to change their spamming strategies and pretend to be normal users to fool the system. A built system may become less effective when the spammers create many new, evasive accounts. Second, many social media sites like Twitter have become a target of link farming (Ghosh et al. 2012). The reflexive reciprocity (Weng et al. 2010; Hu et al. 2013b) indicates that many users simply follow back when they are followed by someone for the sake of courtesy. It is easier for spammers to acquire a large number of follower links in social media. Thus, with the perceived social influence, they can avoid being detected by network-based methods. Similar results targeting other platforms such as Renren (Yang et al. 2011) have been reported in literature as well. Existing systems rely on building a new model to capture newly emerging content-based and network-based patterns of social spammers. Given the rapidly evolving nature, it is necessary to have a framework that efficiently reflects the effect of newly emerging data. 
One efficient approach to incrementally update existing model in large-scale data analysis is online learning. While online learning has been studied for years and shown its effectiveness in many applications such as image and video processing (Mairal et al. 2009) and human computer interaction.", "title": "" }, { "docid": "6021b5aa102fe910eb7428265c056fc8", "text": "Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.", "title": "" } ]
scidocsrr