Columns: query_id (string, fixed 32 characters), query (string, 6 to 5.38k characters), positive_passages (list of 1 to 22 passages), negative_passages (list of 9 to 100 passages), subset (string, 7 distinct values).
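For clarity, here is a minimal sketch of how rows with this schema could be iterated in Python. It assumes the records are exported as JSON Lines with the field names shown above; the file name "scidocsrr.jsonl" and the JSONL storage format are assumptions for illustration, not part of the original dump.

```python
import json

# Minimal sketch: iterate over rows that follow the schema described above.
# Assumption: rows are stored as JSON Lines; "scidocsrr.jsonl" is a placeholder path.
with open("scidocsrr.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        query_id = row["query_id"]            # 32-character identifier
        query = row["query"]                  # free-text query, often a paper title
        positives = row["positive_passages"]  # list of {"docid", "text", "title"} dicts
        negatives = row["negative_passages"]  # list of {"docid", "text", "title"} dicts
        subset = row["subset"]                # one of 7 subset names, e.g. "scidocsrr"
        print(f"{query_id} [{subset}] {len(positives)} pos / {len(negatives)} neg: {query[:60]}")
```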
query_id: 0e6e396ce79a87aaa77afb0a8401d932
query: Linked Vocabulary Recommendation Tools for Internet of Things: A Survey
positive_passages: [
{
"docid": "0e758ff82eae43d705b6fde249b29998",
"text": "The continued growth of the World Wide Web makes the retrieval of relevant information for a user’s query increasingly difficult. Current search engines provide the user with many web pages, but varying levels of relevancy. In response, the Semantic Web has been proposed to retrieve and use more semantic information from the web. Our prior research has developed a Semantic Retrieval System to automate the processing of a user’s query while taking into account the query’s context. The system uses WordNet and the DARPA Agent Markup Language (DAML) ontologies to act as surrogates for understanding the context of terms in a user’s query. Like other applications that use ontologies, our system relies on using ‘good’ ontologies. This research draws upon semiotic theory to develop a suite of metrics that assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the “Ontology Auditor.” An initial validation of the Ontology Auditor on the DAML library of domain ontologies indicates that the metrics are feasible and highlight the wide variations in quality among ontologies in the library. Acknowledgments The authors wish to thank Xinlin Tang and Sunyoung Cho for comments on a previous draft. This research was supported by Oakland University and by Georgia State University.",
"title": ""
}
]
negative_passages: [
{
"docid": "985f5d7e0adbfd8a083c7ec2b5776b50",
"text": "The effect of emotional disclosure through expressive writing on available working memory (WM) capacity was examined in 2 semester-long experiments. In the first study, 35 freshmen assigned to write about their thoughts and feelings about coming to college demonstrated larger working memory gains 7 weeks later compared with 36 writers assigned to a trivial topic. Increased use of cause and insight words was associated with greater WM improvements. In the second study, students (n = 34) who wrote about a negative personal experience enjoyed greater WM improvements and declines in intrusive thinking compared with students who wrote about a positive experience (n = 33) or a trivial topic (n = 34). The results are discussed in terms of a model grounded in cognitive and social psychological theory in which expressive writing reduces intrusive and avoidant thinking about a stressful experience, thus freeing WM resources.",
"title": ""
},
{
"docid": "540f29acdb176ed4e2d712d520881931",
"text": "In this article a new antenna for Long Term Evolution (LTE) communication technology for automotive application is presented. It is designed to be set on the roof of a car underneath a standard plastic cover. Even though the antenna operates at a wide frequency band (from 698 MHz to 960 MHz and from 1470 MHz to 2700 MHz) in small mounting volume, the antenna requires no matching network. It obtains the demanded omnidirectional radiation pattern within all the above mentioned LTE frequency bands. A low-loss panel design enables a high efficiency of the antenna. The antenna properties are shown via simulation and measurement results of a functional demonstrator.",
"title": ""
},
{
"docid": "29e56287071ca1fc1bf3d83f67b3ce8d",
"text": "In this paper, we seek to identify factors that might increase the likelihood of adoption and continued use of cyberinfrastructure by scientists. To do so, we review the main research on Information and Communications Technology (ICT) adoption and use by addressing research problems, theories and models used, findings, and limitations. We focus particularly on the individual user perspective. We categorize previous studies into two groups: Adoption research and post-adoption (continued use) research. In addition, we review studies specifically regarding cyberinfrastructure adoption and use by scientists and other special user groups. We identify the limitations of previous theories, models and research findings appearing in the literature related to our current interest in scientists’ adoption and continued use of cyber-infrastructure. We synthesize the previous theories and models used for ICT adoption and use, and then we develop a theoretical framework for studying scientists’ adoption and use of cyber-infrastructure. We also proposed a research design based on the research model developed. Implications for researchers and practitioners are provided.",
"title": ""
},
{
"docid": "0899cfa62ccd036450c079eb3403902a",
"text": "Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.",
"title": ""
},
{
"docid": "fdc9ea84004c259ceefa8e3195372b71",
"text": "Agile methods are often seen as providing ways to avoid overheads typically perceived as being imposed by traditional software development environments. However, few organizations are psychologically or technically able to take on an agile approach rapidly and effectively. Here, we describe a number of approaches to assist in such a transition. The Agile Software Solution Framework (ASSF) provides an overall context for the exploration of agile methods, knowledge and governance and contains an Agile Toolkit for quantifying part of the agile process. These link to the business aspects of software development so that the business value and agile process are well aligned. Finally, we describe how these theories are applied in practice with two industry case studies using the Agile Adoption and Improvement Model (AAIM). 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "34e8cbfa11983f896d9e159daf08cc27",
"text": "XtratuM is an hypervisor designed to meet safety critical requirements. Initially designed for x86 architectures (version 2.0), it has been strongly redesigned for SPARC v8 arquitecture and specially for the to the LEON2 processor. Current version 2.2, includes all the functionalities required to build safety critical systems based on ARINC 653, AUTOSTAR and other standards. Although XtratuMdoes not provides a compliant API with these standards, partitions can offer easily the appropriated API to the applications. XtratuM is being used by the aerospace sector to build software building blocks of future generic on board software dedicated to payloads management units in aerospace. XtratuM provides ARINC 653 scheduling policy, partition management, inter-partition communications, health monitoring, logbooks, traces, and other services to easily been adapted to the ARINC standard. The configuration of the system is specified in a configuration file (XML format) and it is compiled to achieve a static configuration of the final container (XtratuM and the partition’s code) to be deployed to the hardware board. As far as we know, XtratuM is the first hypervisor for the SPARC v8 arquitecture. In this paper, the main design aspects are discussed and the internal architecture described. An evaluation of the most significant metrics is also provided. This evaluation permits to affirm that the overhead of a hypervisor is lower than 3% if the slot duration is higher than 1 millisecond.",
"title": ""
},
{
"docid": "c406d734f32cc4b88648c037d9d10e46",
"text": "In this paper, we review the state-of-the-art technologies for driver inattention monitoring, which can be classified into the following two main categories: 1) distraction and 2) fatigue. Driver inattention is a major factor in most traffic accidents. Research and development has actively been carried out for decades, with the goal of precisely determining the drivers' state of mind. In this paper, we summarize these approaches by dividing them into the following five different types of measures: 1) subjective report measures; 2) driver biological measures; 3) driver physical measures; 4) driving performance measures; and 5) hybrid measures. Among these approaches, subjective report measures and driver biological measures are not suitable under real driving conditions but could serve as some rough ground-truth indicators. The hybrid measures are believed to give more reliable solutions compared with single driver physical measures or driving performance measures, because the hybrid measures minimize the number of false alarms and maintain a high recognition rate, which promote the acceptance of the system. We also discuss some nonlinear modeling techniques commonly used in the literature.",
"title": ""
},
{
"docid": "411430d1e0718571dd008fc5694a8e7f",
"text": "Despite the known dangers of driver fatigue, it is a difficult construct to study empirically. Different forms of task-induced fatigue may differ in their effects on driver performance and safety. Desmond and Hancock (2001) defined active and passive fatigue states that reflect different styles of workload regulation. In 2 driving simulator studies we investigated the multidimensional subjective states and safety outcomes associated with active and passive fatigue. Wind gusts were used to induce active fatigue, and full vehicle automation to induce passive fatigue. Drive duration was independently manipulated to track the development of fatigue states over time. Participants were undergraduate students. Study 1 (N = 108) focused on subjective response and associated cognitive stress processes, while Study 2 (N = 168) tested fatigue effects on vehicle control and alertness. In both studies the 2 fatigue manipulations produced different patterns of subjective response reflecting different styles of workload regulation, appraisal, and coping. Active fatigue was associated with distress, overload, and heightened coping efforts, whereas passive fatigue corresponded to large-magnitude declines in task engagement, cognitive underload, and reduced challenge appraisal. Study 2 showed that only passive fatigue reduced alertness, operationalized as speed of braking and steering responses to an emergency event. Passive fatigue also increased crash probability, but did not affect a measure of vehicle control. Findings support theories that see fatigue as an outcome of strategies for managing workload. The distinction between active and passive fatigue is important for assessment of fatigue and for evaluating automated driving systems which may induce dangerous levels of passive fatigue.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
{
"docid": "1e7c1dfe168aec2353b31613811112ae",
"text": "A great video title describes the most salient event compactly and captures the viewer’s attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset.",
"title": ""
},
{
"docid": "a1757ee58eb48598d3cd6e257b53cd10",
"text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.",
"title": ""
},
{
"docid": "1db6ea040880ceeb57737a5054206127",
"text": "Several studies regarding security testing for corporate environments, networks, and systems were developed in the past years. Therefore, to understand how methodologies and tools for security testing have evolved is an important task. One of the reasons for this evolution is due to penetration test, also known as Pentest. The main objective of this work is to provide an overview on Pentest, showing its application scenarios, models, methodologies, and tools from published papers. Thereby, this work may help researchers and people that work with security to understand the aspects and existing solutions related to Pentest. A systematic mapping study was conducted, with an initial gathering of 1145 papers, represented by 1090 distinct papers that have been evaluated. At the end, 54 primary studies were selected to be analyzed in a quantitative and qualitative way. As a result, we classified the tools and models that are used on Pentest. We also show the main scenarios in which these tools and methodologies are applied to. Finally, we present some open issues and research opportunities on Pentest.",
"title": ""
},
{
"docid": "b50c6702253a3b56acf42fca6d4af883",
"text": "Infusion therapy is one of the largest practised therapies in any healthcare organisation, and infusion pumps are used to deliver millions of infusions every year in the NHS. The aircraft industry downloads information from 'black boxes' to help design better systems and reduce risk; however, the same cannot be said about error logs and data logs from infusion pumps. This study downloaded and analysed approximately 360 000 hours of infusion pump error logs from 131 infusion pumps used for up to 2 years in one large acute hospital. Staff had to manage 260 129 alarms; this accounted for approximately 5% of total infusion time, costing about £1000 per pump per year. This paper describes many such insights, including numerous technical errors, propensity for certain alarms in clinical conditions, logistical issues and how infrastructure problems can lead to an increase in alarm conditions. Routine use of error log analysis, combined with appropriate management of pumps to help identify improved device design, use and application is recommended.",
"title": ""
},
{
"docid": "b7957cc83988e0be2da64f6d9837419c",
"text": "Description: A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing. The authors are acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction. Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.",
"title": ""
},
{
"docid": "2a36a2ab5b0e01da90859179a60cef9a",
"text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).",
"title": ""
},
{
"docid": "1b7b64bd6c51a2a81c112a43ff10bb86",
"text": "We propose techniques for decentralizing prediction markets and order books, utilizing Bitcoin’s security model and consensus mechanism. Decentralization of prediction markets offers several key advantages over a centralized market: no single entity governs over the market, all transactions are transparent in the block chain, and anybody can participate pseudonymously to either open a new market or place bets in an existing one. We provide trust agility: each market has its own specified arbiter and users can choose to interact in markets that rely on the arbiters they trust. We also provide a transparent, decentralized order book that enables order execution on the block chain in the presence of potentially malicious miners. 1 Introductory Remarks Bitcoin has demonstrated that achieving consensus in a decentralized network is practical. This has stimulated research on applying Bitcoin-esque consensus mechanisms to new applications (e.g., DNS through Namecoin, timestamping through CommitCoin [10], and smart contracts through Ethereum). In this paper, we consider application of Bitcoin’s principles to prediction markets. A prediction market (PM) enables forecasts about uncertain future events to be forged into financial instruments that can be traded (bought, sold, shorted, etc.) until the uncertainty of the event is resolved. In several common forecasting scenarios, PMs have demonstrated lower error than polls, expert opinions, and statistical inference [2]. Thus an open and transparent PM not only serves its traders, it serves any stakeholder in the outcome by providing useful forecasting information through prices. Whenever discussing the application of Bitcoin to a new technology or service, its important to distinguish exactly what is meant. For example, a “Bitcoin-based prediction market” could mean at least three different things: (1) adding Bitcoin-related contracts (e.g., the future Bitcoin/USD exchange rate) to a traditional centralized PM, (2) converting the underlying currency of a centralized prediction market to Bitcoin, or (3) applying the design principles of Bitcoin to decentralize the functionality and governance of a PM. Of the three interpretations, approach (1) is not a research contribution. Approach (2) inherits most of the properties of a traditional PM: Opening markets for new future events is subject to a commitment by the PM host to determine the outcome, virtually any trading rules can be implemented, and trade settlement and clearing can be automated if money is held in trading accounts. In addition, by denominating the PM in Bitcoin, approach (2) enables easy electronic deposits and withdrawals from trading accounts, and can add a level of anonymity. An example of approach (2) is Predictious. This set of properties is a desirable starting point but we see several ways it can be improved through approach (3). Thus, our contribution is a novel PM design that enables: • A Decentralized Clearing/Settlement Service. Fully automated settlement and clearing of trades without escrowing funds to a trusted straight through processor (STP). • A Decentralized Order Matching Service. Fully automated matching of orders in a built-in call market, plus full support for external centralized exchanges. 4 http://namecoin.info 5 http://www.ethereum.org 6 https://www.predictious.com • Self-Organized Markets. Any participant can solicit forecasts on any event by arranging for any entity (or group of entities) to arbitrate the final payout based on the event’s outcome. 
• Agile Arbitration. Anyone can serve as an arbiter, and arbiters only need to sign two transactions (an agreement to serve and a declaration of an outcome) keeping the barrier to entry low. Traders can choose to participate in markets with arbiters they trust. Our analogue of Bitcoin miners can also arbitrate. • Transparency by Design. All trades, open markets, and arbitrated outcomes are reflected in a public ledger akin to Bitcoin’s block chain. • Flexible Fees. Fees paid to various parties can be configured on a per-market basis, with levels driven by market conditions (e.g., the minimum to incentivize correct behavior). • Resilience. Disruption to sets of participants will not disrupt the operations of the PM. • Theft Resistance. Like Bitcoin, currency and PM shares are held by the traders, and no transfers are possible without the holder’s digital signature. However like Bitcoin, users must protect their private keys and understand the risks of keeping money on an exchange service. • Pseudonymous Transactions. Like Bitcoin, holders of funds and shares are only identified with a pseudonymous public key, and any individual can hold an arbitrary number of keys. 2 Preliminaries and Related Work 2.1 Prediction Markets A PM enables participants to trade financial shares tied to the outcome of a specified future event. For example, if Alice, Bob, and Charlie are running for president, a share in ‘the outcome of Alice winning’ might entitle its holder to $1 if Alice wins and $0 if she does not. If the participants believed Alice to have a 60% chance of winning, the share would have an expected value of $0.60. In the opposite direction, if Bob and Charlie are trading at $0.30 and $0.10 respectively, the market on aggregate believes their likelihood of winning to be 30% and 10%. One of the most useful innovations of PMs is the intuitiveness of this pricing function [24]. Amateur traders and market observers can quickly assess current market belief, as well as monitor how forecasts change over time. The economic literature provides evidence that PMs can forecast certain types of events more accurately than methods that do not use financial incentives, such as polls (see [2] for an authoritative summary). They have been deployed internally by organizations such as the US Department of Defense, Microsoft, Google, IBM, and Intel, to forecast things like national security threats, natural disasters, and product development time and cost [2]. The literature on PMs tends to focus on topics orthogonal to how PMs are technically deployed, such as market scoring rules for market makers [13,9], accuracy of forecasts [23], and the relationship between share price and market belief [24]. Concurrently with the review of our paper, a decentralized PM called Truthcoin was independently proposed. It is also a Bitcoin-based design, however it focuses on determining a voting mechanism that incentivizes currency holders to judge the outcome of all events. We argue for designated arbitration in Section 5.1. Additionally, our approach does not use a market maker and is based on asset trading through a decentralized order book.",
"title": ""
},
{
"docid": "5edbc7588faccbae73037b50316656cb",
"text": "Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult for manned aircraft to access. Its control tasks are empowered by computer vision technology. Visual sensors are robustly used for stabilization as primary or at least secondary sensors. Hence, UAV stabilization by attitude estimation from visual sensors is a very active research area. Vision based techniques are proving their effectiveness and robustness in handling this problem. In this work a comprehensive review of UAV vision based attitude estimation approaches is covered, starting from horizon based methods and passing by vanishing points, optical flow, and stereoscopic based techniques. A novel segmentation approach for UAV attitude estimation based on polarization is proposed. Our future insightes for attitude estimation from uncalibrated catadioptric sensors are also discussed.",
"title": ""
},
{
"docid": "fb2973fd2191c35880ed7f6919f86298",
"text": "In this course, you will: • Learn about the HR function, its strategic organizational position, and the challenges faced in developing human resources to ensure retention and growth of core competencies. • Learn from the text, current issues in the news and classroom discussion the practice and management of HR functions as related to job design, staffing, training, compensation, benefits, assessment, counseling, IT, labor relations, legal issues, and organizational culture and change. • Have the opportunity to test the concepts, innovate and contribute from your personal experiences to enrich the class. • Demonstrate your understanding of these concepts through individual and group exercises, papers and projects. These activities will serve as a platform for you to examine the issues, problem-solve and critically analyze the outcomes.",
"title": ""
}
]
subset: scidocsrr

query_id: fee80fd95587516e29635959b2d2fe5c
query: SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks
positive_passages: [
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
{
"docid": "704d068f791a8911068671cb3dca7d55",
"text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.",
"title": ""
},
{
"docid": "a0437070b667281f6cbb657815d7f5c8",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a b s t r a c t a r t i c l e i n f o This paper presents a novel approach to visual saliency that relies on a contextually adapted representation produced through adaptive whitening of color and scale features. Unlike previous models, the proposal is grounded on the specific adaptation of the basis of low level features to the statistical structure of the image. Adaptation is achieved through decorrelation and contrast normalization in several steps in a hierarchical approach, in compliance with coarse features described in biological visual systems. Saliency is simply computed as the square of the vector norm in the resulting representation. The performance of the model is compared with several state-of-the-art approaches, in predicting human fixations using three different eye-tracking datasets. Referring this measure to the performance of human priority maps, the model proves to be the only one able to keep the same behavior through different datasets, showing free of biases. Moreover, it is able to predict a wide set of relevant psychophysical observations, to our knowledge, not reproduced together by any other model before. Research on the estimation of visual saliency has experienced an increasing activity in the last years from both computer vision and neuro-science perspectives, giving rise to a number of improved approaches. Furthermore, a wide diversity of applications based on saliency are being proposed that range from image retargeting [1] to human-like robot surveillance [2], object learning and recognition [3–5], objectness definition [6], image processing for retinal implants [7], and many others. Existing approaches to visual saliency have adopted a number of quite different strategies. A first group, including many early models, is very influenced by psychophysical theories supporting a parallel processing of several feature dimensions. Models in this group are particularly concerned with biological plausibility in their formulation, and they resort to the modeling of visual functions. Outstanding examples can be found in [8] or in [9]. Most recent models are in a second group that broadly aims to estimate the inverse of the probability density of a set of low level features by different procedures. In this kind of models, low level features are usually …",
"title": ""
}
]
negative_passages: [
{
"docid": "f23bde650be816fdca4594c180c47309",
"text": "Indian economy highly depends on agricultural productivity. An important role is played by the detection of disease to obtain a perfect results in agriculture, and it is natural to have disease in plants. Proper care should be taken in this area for product quality and quantity. To reduce the large amount of monitoring in field automatic detection techniques can be used. This paper discuss different processes for segmentation technique which can be applied for different lesion disease detection. Thresholding and K-means cluster algorithms are done to detect different diseases in plant leaf.",
"title": ""
},
{
"docid": "65385d7aee49806476dc913f6768fc43",
"text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.",
"title": ""
},
{
"docid": "c65f050e911abb4b58b4e4f9b9aec63b",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "8d570c7d70f9003b9d2f9bfa89234c35",
"text": "BACKGROUND\nThe targeting of the prostate-specific membrane antigen (PSMA) is of particular interest for radiotheragnostic purposes of prostate cancer. Radiolabeled PSMA-617, a 1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-functionalized PSMA ligand, revealed favorable kinetics with high tumor uptake, enabling its successful application for PET imaging (68Ga) and radionuclide therapy (177Lu) in the clinics. In this study, PSMA-617 was labeled with cyclotron-produced 44Sc (T 1/2 = 4.04 h) and investigated preclinically for its use as a diagnostic match to 177Lu-PSMA-617.\n\n\nRESULTS\n44Sc was produced at the research cyclotron at PSI by irradiation of enriched 44Ca targets, followed by chromatographic separation. 44Sc-PSMA-617 was prepared under standard labeling conditions at elevated temperature resulting in a radiochemical purity of >97% at a specific activity of up to 10 MBq/nmol. 44Sc-PSMA-617 was evaluated in vitro and compared to the 177Lu- and 68Ga-labeled match, as well as 68Ga-PSMA-11 using PSMA-positive PC-3 PIP and PSMA-negative PC-3 flu prostate cancer cells. In these experiments it revealed similar in vitro properties to that of 177Lu- and 68Ga-labeled PSMA-617. Moreover, 44Sc-PSMA-617 bound specifically to PSMA-expressing PC-3 PIP tumor cells, while unspecific binding to PC-3 flu cells was not observed. The radioligands were investigated with regard to their in vivo properties in PC-3 PIP/flu tumor-bearing mice. 44Sc-PSMA-617 showed high tumor uptake and a fast renal excretion. The overall tissue distribution of 44Sc-PSMA-617 resembled that of 177Lu-PSMA-617 most closely, while the 68Ga-labeled ligands, in particular 68Ga-PSMA-11, showed different distribution kinetics. 44Sc-PSMA-617 enabled distinct visualization of PC-3 PIP tumor xenografts shortly after injection, with increasing tumor-to-background contrast over time while unspecific uptake in the PC-3 flu tumors was not observed.\n\n\nCONCLUSIONS\nThe in vitro characteristics and in vivo kinetics of 44Sc-PSMA-617 were more similar to 177Lu-PSMA-617 than to 68Ga-PSMA-617 and 68Ga-PSMA-11. Due to the almost four-fold longer half-life of 44Sc as compared to 68Ga, a centralized production of 44Sc-PSMA-617 and transport to satellite PET centers would be feasible. These features make 44Sc-PSMA-617 particularly appealing for clinical application.",
"title": ""
},
{
"docid": "f873e55f76905f465e17778f25ba2a79",
"text": "PURPOSE\nThe purpose of this study is to develop an automatic human movement classification system for the elderly using single waist-mounted tri-axial accelerometer.\n\n\nMETHODS\nReal-time movement classification algorithm was developed using a hierarchical binary tree, which can classify activities of daily living into four general states: (1) resting state such as sitting, lying, and standing; (2) locomotion state such as walking and running; (3) emergency state such as fall and (4) transition state such as sit to stand, stand to sit, stand to lie, lie to stand, sit to lie, and lie to sit. To evaluate the proposed algorithm, experiments were performed on five healthy young subjects with several activities, such as falls, walking, running, etc.\n\n\nRESULTS\nThe results of experiment showed that successful detection rate of the system for all activities were about 96%. To evaluate long-term monitoring, 3 h experiment in home environment was performed on one healthy subject and 98% of the movement was successfully classified.\n\n\nCONCLUSIONS\nThe results of experiment showed a possible use of this system which can monitor and classify the activities of daily living. For further improvement of the system, it is necessary to include more detailed classification algorithm to distinguish several daily activities.",
"title": ""
},
{
"docid": "59b1cbd4f94c231c7d5a1f06672c3faf",
"text": "Life stress is a major predictor of the course of bipolar disorder. Few studies have used laboratory paradigms to examine stress reactivity in bipolar disorder, and none have assessed autonomic reactivity to laboratory stressors. In the present investigation we sought to address this gap in the literature. Participants, 27 diagnosed with bipolar I disorder and 24 controls with no history of mood disorder, were asked to complete a complex working memory task presented as \"a test of general intelligence.\" Self-reported emotions were assessed at baseline and after participants were given task instructions; autonomic physiology was assessed at baseline and continuously during the stressor task. Compared to controls, individuals with bipolar disorder reported greater increases in pretask anxiety from baseline and showed greater cardiovascular threat reactivity during the task. Group differences in cardiovascular threat reactivity were significantly correlated with comorbid anxiety in the bipolar group. Our results suggest that a multimethod approach to assessing stress reactivity-including the use of physiological parameters that differentiate between maladaptive and adaptive profiles of stress responding-can yield valuable information regarding stress sensitivity and its associations with negative affectivity in bipolar disorder. (PsycINFO Database Record (c) 2015 APA, all rights reserved).",
"title": ""
},
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "5837606de41a0ed39c093d8f65a9176c",
"text": "Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, \"Geminoid F\", a typical humanoid robot with less facial degrees of freedom, \"Robovie R2\", and a robot with a 3-axis rotatable neck and movable lips, \"Telenoid R2\"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.",
"title": ""
},
{
"docid": "decd813dfea894afdceb55b3ca087487",
"text": "BACKGROUND\nAddiction to smartphone usage is a common worldwide problem among adults, which might negatively affect their wellbeing. This study investigated the prevalence and factors associated with smartphone addiction and depression among a Middle Eastern population.\n\n\nMETHODS\nThis cross-sectional study was conducted in 2017 using a web-based questionnaire distributed via social media. Responses to the Smartphone Addiction Scale - Short version (10-items) were rated on a 6-point Likert scale, and their percentage mean score (PMS) was commuted. Responses to Beck's Depression Inventory (20-items) were summated (range 0-60); their mean score (MS) was commuted and categorized. Higher scores indicated higher levels of addiction and depression. Factors associated with these outcomes were identified using descriptive and regression analyses. Statistical significance was set at P < 0.05.\n\n\nRESULTS\nComplete questionnaires were 935/1120 (83.5%), of which 619 (66.2%) were females and 316 (33.8%) were males. The mean ± standard deviation of their age was 31.7 ± 11 years. Majority of participants obtained university education 766 (81.9%), while 169 (18.1%) had school education. The PMS of addiction was 50.2 ± 20.3, and MS of depression was 13.6 ± 10.0. A significant positive linear relationship was present between smart phone addiction and depression (y = 39.2 + 0.8×; P < 0.001). Significantly higher smartphone addiction scores were associated with younger age users, (β = - 0.203, adj. P = 0.004). Factors associated with higher depression scores were school educated users (β = - 2.03, adj. P = 0.01) compared to the university educated group and users with higher smart phone addiction scores (β =0.194, adj. P < 0.001).\n\n\nCONCLUSIONS\nThe positive correlation between smartphone addiction and depression is alarming. Reasonable usage of smart phones is advised, especially among younger adults and less educated users who could be at higher risk of depression.",
"title": ""
},
{
"docid": "a61dd2408c467513b1f1d27c5de9a7ea",
"text": "This paper presents a new class of wideband 90° hybrid coupler with an arbitrary coupling level. The physical size of the proposed coupler is close to that of a conventional two-section branch-line coupler, but it has an additional phase inverter. The impedance bandwidth of the proposed coupler is close to that of a four-section branch-line coupler. The proposed coupler is a backward-wave coupler with a port assignment different from that of a conventional branch-line coupler. The design formulas of the proposed coupler are proved based on its even- and odd-mode half structures. We demonstrated three couplers at the center frequency of 2 GHz with different design parameters.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "7b220c4e424abd4c6a724c7d0b45c0f4",
"text": "Text in video is a very compact and accurate clue for video indexing and summarization. Most video text detection and extraction methods hold assumptions on text color, background contrast, and font style. Moreover, few methods can handle multilingual text well since different languages may have quite different appearances. This paper performs a detailed analysis of multilingual text characteristics, including English and Chinese. Based on the analysis, we propose a comprehensive, efficient video text detection, localization, and extraction method, which emphasizes the multilingual capability over the whole processing. The proposed method is also robust to various background complexities and text appearances. The text detection is carried out by edge detection, local thresholding, and hysteresis edge recovery. The coarse-to-fine localization scheme is then performed to identify text regions accurately. The text extraction consists of adaptive thresholding, dam point labeling, and inward filling. Experimental results on a large number of video images and comparisons with other methods are reported in detail.",
"title": ""
},
{
"docid": "baba2dc1de14cc70f88284d3e7d2c41b",
"text": "Deep generative models have achieved remarkable success in various data domains, including images, time series, and natural languages. There remain, however, substantial challenges for combinatorial structures, including graphs. One of the key challenges lies in the difficulty of ensuring semantic validity in context. For example, in molecular graphs, the number of bonding-electron pairs must not exceed the valence of an atom; whereas in protein interaction networks, two proteins may be connected only when they belong to the same or correlated gene ontology terms. These constraints are not easy to be incorporated into a generative model. In this work, we propose a regularization framework for variational autoencoders as a step toward semantic validity. We focus on the matrix representation of graphs and formulate penalty terms that regularize the output distribution of the decoder to encourage the satisfaction of validity constraints. Experimental results confirm a much higher likelihood of sampling valid graphs in our approach, compared with others reported in the literature.",
"title": ""
},
{
"docid": "0fbc38c8a8c4171785902382e8d43762",
"text": "Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was setup to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge where the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms where significantly better than all other algorithms in the challenge (p<0.05) and had an efficient implementation with a run time of 8min and 3s per case respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/.",
"title": ""
},
{
"docid": "d18d67949bae399cdc148f2ded81903a",
"text": "Stock market news and investing tips are popular topics in Twitter. In this paper, first we utilize a 5-year financial news corpus comprising over 50,000 articles collected from the NASDAQ website for the 30 stock symbols in Dow Jones Index (DJI) to train a directional stock price prediction system based on news content. Then we proceed to prove that information in articles indicated by breaking Tweet volumes leads to a statistically significant boost in the hourly directional prediction accuracies for the prices of DJI stocks mentioned in these articles. Secondly, we show that using document-level sentiment extraction does not yield to a statistically significant boost in the directional predictive accuracies in the presence of other 1-gram keyword features.",
"title": ""
},
{
"docid": "41b7e610e0aa638052f71af1902e92d5",
"text": "This work investigates how social bots can phish employees of organizations, and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of which visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.",
"title": ""
},
{
"docid": "22b47cfd0170734f5f3e3fd2b5230bce",
"text": "We present a synthesis method for communication protocols for active safety applications that satisfy certain formal specifications on quality of service requirements. The protocols are developed to provide reliable communication services for automobile active safety applications. The synthesis method transforms a specification into a distributed implementation of senders and receivers that together satisfy the quality of service requirements by transmitting messages over an unreliable medium. We develop a specification language and an execution model for the implementations, and demonstrate the viability of our method by developing a protocol for a traffic scenario in which a car runs a red light at a busy intersection.",
"title": ""
},
{
"docid": "2ec14d4544d1fcc6591b6f31140af204",
"text": "To better understand the molecular and cellular differences in brain organization between human and nonhuman primates, we performed transcriptome sequencing of 16 regions of adult human, chimpanzee, and macaque brains. Integration with human single-cell transcriptomic data revealed global, regional, and cell-type–specific species expression differences in genes representing distinct functional categories. We validated and further characterized the human specificity of genes enriched in distinct cell types through histological and functional analyses, including rare subpallial-derived interneurons expressing dopamine biosynthesis genes enriched in the human striatum and absent in the nonhuman African ape neocortex. Our integrated analysis of the generated data revealed diverse molecular and cellular features of the phylogenetic reorganization of the human brain across multiple levels, with relevance for brain function and disease.",
"title": ""
},
{
"docid": "6e9edeffb12cf8e50223a933885bcb7c",
"text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.",
"title": ""
},
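The record above describes reversible data hiding in encrypted images based on MSB prediction. As a purely illustrative sketch (not the authors' CPE/EPE algorithms, and skipping the encryption step entirely), the following toy hides payload bits in pixel MSBs and later restores the image by predicting each MSB from the left neighbour, which works when the image is smooth; all data here are synthetic.

```python
import numpy as np

def embed_msb(cover, payload_bits):
    """Toy MSB embedding: overwrite the MSB of every pixel except the first
    column with one payload bit (assumes the payload fits)."""
    marked = cover.copy()
    h, w = cover.shape
    bits = iter(payload_bits)
    for r in range(h):
        for c in range(1, w):                 # keep column 0 untouched as predictor seed
            b = next(bits, None)
            if b is None:
                return marked
            marked[r, c] = (marked[r, c] & 0x7F) | (b << 7)
    return marked

def extract_and_restore(marked):
    """Recover the payload and rebuild the cover by predicting each pixel's MSB
    from its (already restored) left neighbour; valid only for smooth images,
    which is the assumption behind MSB prediction."""
    restored = marked.copy()
    bits = []
    h, w = marked.shape
    for r in range(h):
        for c in range(1, w):
            bits.append(int(marked[r, c]) >> 7)
            predicted_msb = int(restored[r, c - 1]) >> 7
            restored[r, c] = (marked[r, c] & 0x7F) | (predicted_msb << 7)
    return bits, restored

rng = np.random.default_rng(0)
cover = np.clip(rng.normal(100, 5, (8, 8)), 0, 255).astype(np.uint8)  # smooth toy "image"
payload = rng.integers(0, 2, 8 * 7).tolist()
marked = embed_msb(cover, payload)
bits, restored = extract_and_restore(marked)
print(bits[:10] == payload[:10], np.array_equal(restored, cover))     # expected: True True
```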
{
"docid": "0f7a4ddeb2627b8815175aea809a1ca3",
"text": "A deep community in a graph is a connected component that can only be seen after removal of nodes or edges from the rest of the graph. This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage. The LFVC is associated with the sensitivity of algebraic connectivity to node or edge removals. We prove that a greedy node/edge removal strategy, based on successive maximization of LFVC, has bounded performance loss relative to the optimal, but intractable, combinatorial batch removal strategy. Under a stochastic block model framework, we show that the greedy LFVC strategy can extract deep communities with probability one as the number of observations becomes large. We apply the greedy LFVC strategy to real-world social network datasets. Compared with conventional community detection methods we demonstrate improved ability to identify important communities and key members in the network.",
"title": ""
}
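The abstract above defines deep communities via greedy removal of high-LFVC nodes. A small sketch of that greedy loop follows, using networkx (SciPy required) and one common form of the LFVC score (sum of squared Fiedler-vector differences to neighbours); it is an approximation of the paper's procedure rather than a reimplementation, and the barbell graph is a made-up example.

```python
import networkx as nx

def lfvc_scores(G):
    """Local Fiedler vector centrality of each node, computed here as the sum of
    squared Fiedler-vector differences to its neighbours (one common form)."""
    y = dict(zip(G.nodes(), nx.fiedler_vector(G, seed=1)))
    return {i: sum((y[i] - y[j]) ** 2 for j in G[i]) for i in G}

def greedy_deep_communities(G, n_remove):
    """Greedily delete the highest-LFVC node of the current giant component;
    the connected components left behind are candidate deep communities."""
    H = G.copy()
    removed = []
    for _ in range(n_remove):
        giant = H.subgraph(max(nx.connected_components(H), key=len)).copy()
        if giant.number_of_nodes() < 3:
            break
        scores = lfvc_scores(giant)
        target = max(scores, key=scores.get)
        removed.append(target)
        H.remove_node(target)
    return removed, [sorted(c) for c in nx.connected_components(H)]

# Two cliques joined by a single bridge node: the bridge should have the largest
# LFVC, and removing it exposes the two hidden (deep) communities.
G = nx.barbell_graph(6, 1)
removed, comms = greedy_deep_communities(G, n_remove=1)
print("removed:", removed)          # expected: the bridge node (node 6)
print("communities:", comms)
```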
] |
scidocsrr
|
178f9abe89976ee8ef4fbd41cc46e759
|
A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web
|
[
{
"docid": "cc0a875eca7237f786b81889f028f1f2",
"text": "Online photo services such as Flickr and Zooomr allow users to share their photos with family, friends, and the online community at large. An important facet of these services is that users manually annotate their photos using so called tags, which describe the contents of the photo or provide additional contextual and semantical information. In this paper we investigate how we can assist users in the tagging phase. The contribution of our research is twofold. We analyse a representative snapshot of Flickr and present the results by means of a tag characterisation focussing on how users tags photos and what information is contained in the tagging. Based on this analysis, we present and evaluate tag recommendation strategies to support the user in the photo annotation task by recommending a set of tags that can be added to the photo. The results of the empirical evaluation show that we can effectively recommend relevant tags for a variety of photos with different levels of exhaustiveness of original tagging.",
"title": ""
},
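The abstract above evaluates tag recommendation strategies for photo annotation. A minimal co-occurrence-based recommender in that spirit is sketched below; the photo corpus and tag names are invented, and a real system would add normalization, candidate aggregation and per-user signals.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(tagged_photos):
    """Count how often each ordered pair of tags is assigned to the same photo."""
    co = Counter()
    for tags in tagged_photos:
        for a, b in combinations(sorted(set(tags)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(user_tags, co, top_n=3):
    """Rank candidate tags by summed co-occurrence with the user's own tags."""
    scores = Counter()
    for t in user_tags:
        for (a, b), n in co.items():
            if a == t and b not in user_tags:
                scores[b] += n
    return [tag for tag, _ in scores.most_common(top_n)]

# Hypothetical mini-corpus of already-annotated photos.
photos = [
    {"beach", "sunset", "sea"},
    {"beach", "sea", "surf"},
    {"sunset", "sky", "clouds"},
    {"city", "night", "lights"},
]
co = build_cooccurrence(photos)
print(recommend({"beach"}, co))     # e.g. ['sea', 'sunset', 'surf']
```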
{
"docid": "e59d1a3936f880233001eb086032d927",
"text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.",
"title": ""
}
] |
[
{
"docid": "57fd4b59ffb27c35faa6a5ee80001756",
"text": "This paper describes a novel method for motion generation and reactive collision avoidance. The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.",
"title": ""
},
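The abstract above describes attractor dynamics for reactive motion generation. The 2-D toy below integrates a goal attractor plus a short-range repulsor to suggest the general idea; the gains, influence radius and simple Euler integration are arbitrary choices, not the paper's controller.

```python
import numpy as np

def motion_step(pos, goal, obstacle, dt=0.01, k_att=1.0, k_rep=0.01, influence=0.4):
    """One integration step of a simple attractor + repulsor velocity field:
    a spring-like pull towards the goal plus a short-range push away from the
    obstacle, active only inside its influence radius."""
    v = k_att * (goal - pos)                            # attractor dynamics towards the goal
    d = pos - obstacle
    dist = np.linalg.norm(d)
    if 1e-9 < dist < influence:
        v += k_rep * (1.0 / dist - 1.0 / influence) * d / dist ** 2
    return pos + dt * v

pos = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.15])
path = [pos]
for _ in range(2000):
    pos = motion_step(pos, goal, obstacle)
    path.append(pos)
print("final position:", np.round(pos, 3))              # should end close to the goal
print("closest approach to obstacle:",
      round(min(np.linalg.norm(p - obstacle) for p in path), 3))
```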
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
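The abstract above clusters users by fitting a mixture of first-order Markov models with EM. A compact, unoptimized EM sketch for integer-coded category sequences is given below (synthetic sessions, no convergence checks); it follows the standard mixture-of-Markov-chains formulation rather than the WebCANVAS implementation.

```python
import numpy as np

def em_markov_mixture(seqs, n_states, n_clusters, n_iter=50, seed=0):
    """Fit a mixture of first-order Markov chains to integer-coded sequences with
    EM. Returns mixing weights, initial-state probabilities, transition matrices
    and the soft cluster assignments (responsibilities)."""
    rng = np.random.default_rng(seed)
    pi = np.full(n_clusters, 1.0 / n_clusters)                             # mixing weights
    start = rng.dirichlet(np.ones(n_states), size=n_clusters)              # initial-state probs
    trans = rng.dirichlet(np.ones(n_states), size=(n_clusters, n_states))  # transition matrices
    for _ in range(n_iter):
        # E-step: log-likelihood of every sequence under every cluster.
        log_r = np.zeros((len(seqs), n_clusters))
        for i, s in enumerate(seqs):
            for k in range(n_clusters):
                ll = np.log(pi[k]) + np.log(start[k, s[0]])
                for a, b in zip(s[:-1], s[1:]):
                    ll += np.log(trans[k, a, b])
                log_r[i, k] = ll
        log_r -= log_r.max(axis=1, keepdims=True)
        resp = np.exp(log_r)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities (with smoothing).
        pi = resp.mean(axis=0)
        start = np.full((n_clusters, n_states), 1e-3)
        trans = np.full((n_clusters, n_states, n_states), 1e-3)
        for i, s in enumerate(seqs):
            for k in range(n_clusters):
                start[k, s[0]] += resp[i, k]
                for a, b in zip(s[:-1], s[1:]):
                    trans[k, a, b] += resp[i, k]
        start /= start.sum(axis=1, keepdims=True)
        trans /= trans.sum(axis=2, keepdims=True)
    return pi, start, trans, resp

# Toy sessions over 3 page categories, showing two obvious navigation styles.
sessions = [[0, 1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 1, 0, 1],
            [2, 2, 2, 1], [2, 1, 2, 2], [2, 2, 1, 2, 2]]
pi, start, trans, resp = em_markov_mixture(sessions, n_states=3, n_clusters=2)
print("cluster assignments:", resp.argmax(axis=1))
```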
{
"docid": "fc66ced7b3faad64621722ab30cd5cc9",
"text": "In this paper, we present a novel framework for urban automated driving based 1 on multi-modal sensors; LiDAR and Camera. Environment perception through 2 sensors fusion is key to successful deployment of automated driving systems, 3 especially in complex urban areas. Our hypothesis is that a well designed deep 4 neural network is able to end-to-end learn a driving policy that fuses LiDAR and 5 Camera sensory input, achieving the best out of both. In order to improve the 6 generalization and robustness of the learned policy, semantic segmentation on 7 camera is applied, in addition to applying our new LiDAR post processing method; 8 Polar Grid Mapping (PGM). The system is evaluated on the recently released urban 9 car simulator, CARLA. The evaluation is measured according to the generalization 10 performance from one environment to another. The experimental results show that 11 the best performance is achieved by fusing the PGM and semantic segmentation. 12",
"title": ""
},
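The abstract above mentions a LiDAR post-processing step called Polar Grid Mapping, whose exact definition is not given in the record. The sketch below therefore shows only one plausible reading: binning LiDAR returns into an angle-by-range grid and keeping a per-cell height statistic; the bin counts, maximum range and synthetic point cloud are all assumptions.

```python
import numpy as np

def polar_grid_map(points, n_angle=90, n_range=20, max_range=50.0):
    """Bin LiDAR returns (x, y, z) into an angle x range grid and keep the
    minimum height per cell, one simple way to build a polar grid map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                                    # [-pi, pi)
    keep = r < max_range
    a_idx = ((theta[keep] + np.pi) / (2 * np.pi) * n_angle).astype(int) % n_angle
    r_idx = (r[keep] / max_range * n_range).astype(int).clip(0, n_range - 1)
    grid = np.full((n_angle, n_range), np.nan)
    for a, ri, zi in zip(a_idx, r_idx, z[keep]):
        if np.isnan(grid[a, ri]) or zi < grid[a, ri]:
            grid[a, ri] = zi
    return grid

cloud = np.random.default_rng(0).uniform([-30, -30, -2], [30, 30, 1], size=(1000, 3))
pgm = polar_grid_map(cloud)
print(pgm.shape, np.nanmin(pgm), np.nanmax(pgm))
```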
{
"docid": "2f9efc20fb961bc42f20211a6c958832",
"text": "We introduce PixelPlayer, a system that, by leveraging large amounts of unlabeled videos, learns to locate image regions which produce sounds and separate the input sounds into a set of components that represents the sound from each pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional manual supervision. Experimental results on a newly collected MUSIC dataset show that our proposed Mix-and-Separate framework outperforms several baselines on source separation. Qualitative results suggest our model learns to ground sounds in vision, enabling applications such as independently adjusting the volume of sound sources.",
"title": ""
},
{
"docid": "af9931dbd56100f8b9ea3004d7d43b25",
"text": "Solvent-free microwave extraction (SFME) has been proposed as a green method for the extraction of essential oil from aromatic herbs that are extensively used in the food industry. This technique is a combination of microwave heating and dry distillation performed at atmospheric pressure without any added solvent or water. The isolation and concentration of volatile compounds is performed in a single stage. In this work, SFME and a conventional technique, hydro-distillation HD (Clevenger apparatus), are used for the extraction of essential oil from rosemary (Rosmarinus officinalis L.) and are compared. This preliminary laboratory study shows that essential oils extracted by SFME in 30min were quantitatively (yield and kinetics profile) and qualitatively (aromatic profile) similar to those obtained using conventional hydro-distillation in 2h. Experiments performed in a 75L pilot microwave reactor prove the feasibility of SFME up scaling and potential industrial applications.",
"title": ""
},
{
"docid": "6e923a586a457521e9de9d4a9cab77ad",
"text": "We present a new approach to the matting problem which splits the task into two steps: interactive trimap extraction followed by trimap-based alpha matting. By doing so we gain considerably in terms of speed and quality and are able to deal with high resolution images. This paper has three contributions: (i) a new trimap segmentation method using parametric max-flow; (ii) an alpha matting technique for high resolution images with a new gradient preserving prior on alpha; (iii) a database of 27 ground truth alpha mattes of still objects, which is considerably larger than previous databases and also of higher quality. The database is used to train our system and to validate that both our trimap extraction and our matting method improve on state-of-the-art techniques.",
"title": ""
},
{
"docid": "08f26c702f7d0bb5e21b51d7681869a2",
"text": "Millions of posts are being generated in real-time by users in social networking services, such as Twitter. However, a considerable number of those posts are mundane posts that are of interest to the authors and possibly their friends only. This paper investigates the problem of automatically discovering valuable posts that may be of potential interest to a wider audience. Specifically, we model the structure of Twitter as a graph consisting of users and posts as nodes and retweet relations between the nodes as edges. We propose a variant of the HITS algorithm for producing a static ranking of posts. Experimental results on real world data demonstrate that our method can achieve better performance than several baseline methods.",
"title": ""
},
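The abstract above ranks posts with a HITS variant over a retweet graph. For orientation, plain HITS by power iteration on a toy user-post adjacency matrix looks as follows; the paper's variant differs in how the graph and update rules are defined.

```python
import numpy as np

# Rows = users, columns = posts; A[u, p] = 1 if user u retweeted (endorsed) post p.
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
], dtype=float)

def hits(A, n_iter=100):
    """Plain HITS by power iteration: posts get authority scores, users get hub scores."""
    hubs = np.ones(A.shape[0])
    for _ in range(n_iter):
        auth = A.T @ hubs
        auth /= np.linalg.norm(auth)
        hubs = A @ auth
        hubs /= np.linalg.norm(hubs)
    return hubs, auth

hubs, auth = hits(A)
print("post authority ranking:", np.argsort(-auth))   # post 0 should rank first
```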
{
"docid": "1c02c556aec217e0056e7f8fcb61f14a",
"text": "In this paper, scalable Whole Slide Imaging (sWSI), a novel high-throughput, cost-effective and robust whole slide imaging system on both Android and iOS platforms is introduced and analyzed. With sWSI, most mainstream smartphone connected to a optical eyepiece of any manually controlled microscope can be automatically controlled to capture sequences of mega-pixel fields of views that are synthesized into giga-pixel virtual slides. Remote servers carry out the majority of computation asynchronously to support clients running at satisfying frame rates without sacrificing image quality nor robustness. A typical 15x15mm sample can be digitized in 30 seconds with 4X or in 3 minutes with 10X object magnification, costing under $1. The virtual slide quality is considered comparable to existing high-end scanners thus satisfying for clinical usage by surveyed pathologies. The scan procedure with features such as supporting magnification up to 100x, recoding z-stacks, specimen-type-neutral and giving real-time feedback, is deemed work-flow-friendly and reliable.",
"title": ""
},
{
"docid": "66c548d14007f82d2ab1c5337965e2ae",
"text": "The objective of this paper is to provide a review of recent advances in automatic vibration- and audio-based fault diagnosis in machinery using condition monitoring strategies. It presents the most valuable techniques and results in this field and highlights the most profitable directions of research to present. Automatic fault diagnosis systems provide greater security in surveillance of strategic infrastructures, such as electrical substations and industrial scenarios, reduce downtime of machines, decrease maintenance costs, and avoid accidents which may have devastating consequences. Automatic fault diagnosis systems include signal acquisition, signal processing, decision support, and fault diagnosis. The paper includes a comprehensive bibliography of more than 100 selected references which can be used by researchers working in this field.",
"title": ""
},
{
"docid": "daac9ee402eebc650fe4f98328a7965d",
"text": "5.1. Detection Formats 475 5.2. Food Quality and Safety Analysis 477 5.2.1. Pathogens 477 5.2.2. Toxins 479 5.2.3. Veterinary Drugs 479 5.2.4. Vitamins 480 5.2.5. Hormones 480 5.2.6. Diagnostic Antibodies 480 5.2.7. Allergens 481 5.2.8. Proteins 481 5.2.9. Chemical Contaminants 481 5.3. Medical Diagnostics 481 5.3.1. Cancer Markers 481 5.3.2. Antibodies against Viral Pathogens 482 5.3.3. Drugs and Drug-Induced Antibodies 483 5.3.4. Hormones 483 5.3.5. Allergy Markers 483 5.3.6. Heart Attack Markers 484 5.3.7. Other Molecular Biomarkers 484 5.4. Environmental Monitoring 484 5.4.1. Pesticides 484 5.4.2. 2,4,6-Trinitrotoluene (TNT) 485 5.4.3. Aromatic Hydrocarbons 485 5.4.4. Heavy Metals 485 5.4.5. Phenols 485 5.4.6. Polychlorinated Biphenyls 487 5.4.7. Dioxins 487 5.5. Summary 488 6. Conclusions 489 7. Abbreviations 489 8. Acknowledgment 489 9. References 489",
"title": ""
},
{
"docid": "ee1293cc2e11543c5dad4473b0592f58",
"text": "Mobile ad hoc networks’ (MANETs) inherent power limitation makes power-awareness a critical requirement for MANET protocols. In this paper, we propose a new routing metric, the drain rate, which predicts the lifetime of a node as a function of current traffic conditions. We describe the Minimum Drain Rate (MDR) mechanism which uses a combination of the drain rate with remaining battery capacity to establish routes. MDR can be employed by any existing MANET routing protocol to achieve a dual goal: extend both nodal battery life and connection lifetime. Using the ns-2 simulator and the Dynamic Source Routing (DSR) protocol, we compared MDR to the Minimum Total Transmission Power Routing (MTPR) scheme and the Min-Max Battery Cost Routing (MMBCR) scheme and proved that MDR is the best approach to achieve the dual goal.",
"title": ""
},
{
"docid": "b0ebcd7a340725713e90d05e9a50ae24",
"text": "Analogies are ubiquitous in science, both in theory and experiments. Based on an ethnographic study of a research lab in neural engineering, we focus on a case of conceptual innovation where the cross-breeding of two types of analogies led to a breakthrough. In vivo phenomena were recreated in two analogical forms: one, as an in vitro physical model, and the other, as a computational model of the first physical model. The computational model also embodied constraints drawn from the neuroscience and engineering literature. Cross connections and linkages were then made between these two analogical models, over time, to solve problems. We describe how the development of the intermediary, hybrid computational model led to a conceptual innovation, and subsequent engineering innovations. Using this case study, we highlight some of the peculiar features of such hybrid analogies that are now used widely in the sciences and engineering sciences, and the significant questions they raise for current theories of analogy.",
"title": ""
},
{
"docid": "0081abb45db5d3e893ee1086d1680041",
"text": "`introduction Technologies are amplifying each other in a fusion of technologies across the physical digital and biological worlds. We are witnessing profound shifts across all industries market by the emergence of new business models, the disruption of incumbents and the reshaping of production, consumption, transportation and delivery systems. On the social front a paradigm shift is underway in how we work and communicate, as well as how we express, inform, and entertain our self. Decision makers are too often caught in traditional linear (non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.",
"title": ""
},
{
"docid": "af420d60e9aafb9aa39da5381a681b76",
"text": "In this paper, a novel planar Marchand balun using a patterned ground plane is presented. In the new design, with a slot under the coupled lines cut on the ground plane, the even-mode impedance can be increased substantially. Meanwhile, we propose that two additional separated rectangular conductors are placed under the coupled lines to act as two capacitors so that the odd-mode impedance is decreased. Design theory and procedure are presented to optimize the Marchand balun. As an example, one Marchand balun on a double-sided PCB is designed, simulated, fabricated and measured. The measured return loss is found to be better than – 10 dB over the frequency band from 1.2 GHz to 3.3 GHz, or around 100% bandwidth. The measured amplitude and phase imbalance between the two balanced output ports are within 1 dB and 4, respectively, over the operating frequency band. Index Terms — Baluns, coupled lines, wideband",
"title": ""
},
{
"docid": "011a9ac960aecc4a91968198ac6ded97",
"text": "INTRODUCTION\nPsychological empowerment is really important and has remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. So the aim of this study was to investigate the relationship between psychological empowerment and productivity of Librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was correlational research. Data were collected through two questionnaires. Psychological empowerment questionnaire and the manpower productivity questionnaire of Gold. Smith Hersey which their content validity was confirmed by experts and their reliability was obtained by using Cronbach's Alpha coefficient, 0.89 and 0.9 respectively. Due to limited statistical population, did not used sampling and review was taken via census. So 76 number of librarians were evaluated. Information were reported on both descriptive and inferential statistics (correlation coefficient tests Pearson, Spearman, T-test, ANOVA), and analyzed by using the SPSS19 software.\n\n\nFINDINGS\nIn our study, the trust between partners and efficacy with productivity had the highest correlation. Also there was a direct relationship between psychological empowerment and the productivity of labor (r =0.204). In other words, with rising of mean score of psychological empowerment, the mean score of efficiency increase too.\n\n\nCONCLUSIONS\nThe results showed that if development programs of librarian's psychological empowerment increase in order to their productivity, librarians carry out their duties with better sense. Also with using the capabilities of librarians, the development of creativity with happen and organizational productivity will increase.",
"title": ""
},
{
"docid": "4ce67aeca9e6b31c5021712f148108e2",
"text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. 94 The Journal of Advertising context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see himor herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self ’s nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations. In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. 
The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.",
"title": ""
},
{
"docid": "99efdc3c90fa88ecffeaaae6d907a5ae",
"text": "Subgraph patterns are widely used in graph classification, but their effectiveness is often hampered by large number of patterns or lack of discrimination power among individual patterns. We introduce a novel classification method based on pattern co-occurrence to derive graph classification rules. Our method employs a pattern exploration order such that the complementary discriminative patterns are examined first. Patterns are grouped into co-occurrence rules during the pattern exploration, leading to an integrated process of pattern mining and classifier learning. By taking advantage of co-occurrence information, our method can generate strong features by assembling weak features. Unlike previous methods that invoke the pattern mining process repeatedly, our method only performs pattern mining once. In addition, our method produces a more interpretable classifier and shows better or competitive classification effectiveness in terms of accuracy and execution time.",
"title": ""
},
{
"docid": "fa4480bbc460658bd1ea5804fdebc5ed",
"text": "This paper examines the problem of how to teach multiple tasks to a Reinforcement Learning (RL) agent. To this end, we use Linear Temporal Logic (LTL) as a language for specifying multiple tasks in a manner that supports the composition of learned skills. We also propose a novel algorithm that exploits LTL progression and off-policy RL to speed up learning without compromising convergence guarantees, and show that our method outperforms the state-of-the-art approach on randomly generated Minecraft-like grids.",
"title": ""
},
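The abstract above relies on LTL progression to rewrite the task formula after every environment step. A tiny progression function for a small fragment is sketched below; the tuple encoding, the Minecraft-like propositions and the simplifications are illustrative choices, not the paper's implementation.

```python
def prog(phi, labels):
    """Progress an LTL formula through one step in which the propositions in
    `labels` hold. Formulas are nested tuples over a small fragment:
    'True' | 'False' | prop | ('not', a) | ('and', a, b) | ('or', a, b)
    | ('next', a) | ('until', a, b) | ('eventually', a)."""
    if phi in ('True', 'False'):
        return phi
    if isinstance(phi, str):                       # atomic proposition
        return 'True' if phi in labels else 'False'
    op = phi[0]
    if op == 'not':
        inner = prog(phi[1], labels)
        if inner == 'True':
            return 'False'
        if inner == 'False':
            return 'True'
        return ('not', inner)
    if op in ('and', 'or'):
        a, b = prog(phi[1], labels), prog(phi[2], labels)
        absorbing, neutral = ('False', 'True') if op == 'and' else ('True', 'False')
        if absorbing in (a, b):
            return absorbing
        if a == neutral:
            return b
        if b == neutral:
            return a
        return (op, a, b)
    if op == 'next':
        return phi[1]
    if op == 'until':                              # a U b  ==  prog(b) or (prog(a) and (a U b))
        return prog(('or', phi[2], ('and', phi[1], ('next', phi))), labels)
    if op == 'eventually':                         # F a  ==  prog(a) or (F a)
        return prog(('or', phi[1], ('next', phi)), labels)
    raise ValueError(f'unknown operator {op!r}')

# Task "get wood, then use the toolshed": F (wood and F toolshed)
task = ('eventually', ('and', 'wood', ('eventually', 'toolshed')))
step1 = prog(task, {'grass'})
step2 = prog(step1, {'wood'})
print(step1 == task)   # True: nothing relevant happened, task unchanged
print(step2)           # ('or', ('eventually', 'toolshed'), <original task>): wood branch progressed
```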
{
"docid": "83722bfd8ee5dc71c8dd55b781c35d35",
"text": "The use of amateur drones is expected to significantly increase over the upcoming years. However, regulations do not allow such drones to fly over all areas, in addition to typical altitude limitations. As a result, there is an urgent need for amateur drone surveillance solutions. These solutions should include means of accurate detection, classification, and localization of the unwanted drones in a no-fly zone. In this article, we give an overview of promising techniques for modulation classification and signal-strength-based localization of amateur drones by using surveillance drones. By introducing a generic altitude-dependent propagation model, we show how detection and localization performance depend on the altitude of surveillance drones. Particularly, our simulation results show a 25 dB reduction in the minimum detectable power or 10 times coverage enhancement of a surveillance drone by flying at the optimum altitude. Moreover, for a target no-fly zone, the location estimation error of an amateur drone can be remarkably reduced by optimizing the positions of the surveillance drones. Finally, we conclude the article with a general discussion about the future work and possible challenges in aerial surveillance systems.",
"title": ""
},
{
"docid": "f8339417b0894191670d1528df7ac297",
"text": "OBJECTIVE\nThe purpose of this study was to reanalyze the results of a previously published trial that compared 3 methods of anterior colporrhaphy according to the clinically relevant definitions of success.\n\n\nSTUDY DESIGN\nA secondary analysis of a trial of 114 subjects who underwent surgery for anterior pelvic organ prolapse who were assigned randomly to standard anterior colporrhaphy, ultralateral colporrhaphy, or anterior colporrhaphy plus polyglactin 910 mesh from 1996-1999. For the current analysis, success was defined as (1) no prolapse beyond the hymen, (2) the absence of prolapse symptoms (visual analog scale ≤ 2), and (3) the absence of retreatment.\n\n\nRESULTS\nEighty-eight percent of the women met our definition of success at 1 year. One subject (1%) underwent surgery for recurrence 29 months after surgery. No differences among the 3 groups were noted for any outcomes.\n\n\nCONCLUSION\nReanalysis of a trial of 3 methods of anterior colporrhaphy revealed considerably better success with the use of clinically relevant outcome criteria compared with strict anatomic criteria.",
"title": ""
}
] |
scidocsrr
|
171473280b389a1bc36a5ecbbeebe02e
|
SIFT Hardware Implementation for Real-Time Image Feature Extraction
|
[
{
"docid": "c797b2a78ea6eb434159fd948c0a1bf0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
},
{
"docid": "744519470178d9e53f8e4a06a4c4fdb3",
"text": "Detecting and matching image features is a fundamental task in video analytics and computer vision systems. It establishes the correspondences between two images taken at different time instants or from different viewpoints. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a new FPGA-based embedded system architecture for feature detection and matching. It consists of scale-invariant feature transform (SIFT) feature detection, as well as binary robust independent elementary features (BRIEF) feature description and matching. It is able to establish accurate correspondences between consecutive frames for 720-p (1280x720) video. It optimizes the FPGA architecture for the SIFT feature detection to reduce the utilization of FPGA resources. Moreover, it implements the BRIEF feature description and matching on FPGA. Due to these contributions, the proposed system achieves feature detection and matching at 60 frame/s for 720-p video. Its processing speed can meet and even exceed the demand of most real-life real-time video analytics applications. Extensive experiments have demonstrated its efficiency and effectiveness.",
"title": ""
},
{
"docid": "90378605e6ee192cfedf60d226f8cacf",
"text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.",
"title": ""
}
] |
[
{
"docid": "c35306b0ec722364308d332664c823f8",
"text": "The uniform asymmetrical microstrip parallel coupled line is used to design the multi-section unequal Wilkinson power divider with high dividing ratio. The main objective of the paper is to increase the trace widths in order to facilitate the construction of the power divider with the conventional photolithography method. The separated microstrip lines in the conventional Wilkinson power divider are replaced with the uniform asymmetrical parallel coupled lines. An even-odd mode analysis is used to calculate characteristic impedances and then the per-unit-length capacitance and inductance parameter matrix are used to calculate the physical dimension of the power divider. To clarify the advantages of this method, two three-section Wilkinson power divider with an unequal power-division ratio of 1 : 2.5 are designed and fabricated and measured, one in the proposed configuration and the other in the conventional configuration. The simulation and the measurement results show that not only the specified design goals are achieved, but also all the microstrip traces can be easily implemented in the proposed power divider.",
"title": ""
},
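The abstract above designs an unequal Wilkinson divider from even- and odd-mode impedances of coupled lines. As a numeric reference point only, the classic single-section (uncoupled-line) design equations for a 1:2.5 split are evaluated below; these are textbook formulas, not the paper's coupled-line synthesis, but they show why high dividing ratios lead to hard-to-fabricate trace widths.

```python
import math

def unequal_wilkinson(z0, power_ratio):
    """Textbook single-section design equations for an unequal Wilkinson divider
    with output power split k^2 = P_high / P_low (see e.g. Pozar). Returns the
    two quarter-wave branch impedances and the isolation resistor."""
    k = math.sqrt(power_ratio)
    z_low = z0 * math.sqrt(k * (1 + k ** 2))       # branch feeding the low-power port
    z_high = z_low / k ** 2                        # branch feeding the high-power port
    r_iso = z0 * (k + 1.0 / k)                     # isolation resistor
    return z_high, z_low, r_iso

z_high, z_low, r = unequal_wilkinson(z0=50.0, power_ratio=2.5)
print(f"high-power branch: {z_high:.1f} ohm, low-power branch: {z_low:.1f} ohm, "
      f"isolation resistor: {r:.1f} ohm")
# Roughly 47, 118 and 111 ohm for a 1:2.5 split in a 50 ohm system; the wide
# impedance spread is why high-ratio dividers need very narrow microstrip traces.
```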
{
"docid": "ff3867a1c0ee1d3f1e61cb306af37bb1",
"text": "Introduction: The mucocele is one of the most common benign soft tissue masses that occur in the oral cavity. Mucoceles (mucus and coele cavity), by definition, are cavities filled with mucus. Two types of mucoceles can appear – extravasation type and retention type. Diagnosis is mostly based on clinical findings. The common location of the extravasation mucocele is the lower lip and the treatment of choice is surgical removal. This paper gives an insight into the phenomenon and a case report has been presented. Case report: Twenty five year old femalepatient reported with chief complaint of small swelling on the left side of the lower lip since 2 months. The swelling was diagnosed as extravasation mucocele after history and clinical examination. The treatment involved surgical excision of tissue and regular follow up was done to check for recurrence. Conclusion: The treatment of lesion such as mucocele must be planned taking into consideration the various clinical parameters and any oral habits as these lesions have a propensity of recurrence.",
"title": ""
},
{
"docid": "6f674570fce0c7070b3b1df83ce9da6a",
"text": "Monitoring of the network performance in highspeed Internet infrastructure is a challenging task, as the requirements for the given quality level are service-dependent. Backbone QoS monitoring and analysis in Multi-hop Networks requires therefore knowledge about types of applications forming current network traffic. To overcome the drawbacks of existing methods for traffic classification, usage of C5.0 Machine Learning Algorithm (MLA) was proposed. On the basis of statistical traffic information received from volunteers and C5.0 algorithm we constructed a boosted classifier, which was shown to have ability to distinguish between 7 different applications in test set of 76,632-1,622,710 unknown cases with average accuracy of 99.3-99.9%. This high accuracy was achieved by using high quality training data collected by our system, a unique set of parameters used for both training and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options. This paper shows how we collected accurate traffic data, presents arguments used in classification process, introduces the C5.0 classifier and its options, and finally evaluates and compares the obtained results.",
"title": ""
},
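The abstract above builds a boosted C5.0 classifier from per-flow statistics. C5.0 itself is proprietary, so the sketch below substitutes boosted CART trees from scikit-learn trained on made-up flow features, just to illustrate the train/evaluate workflow; the feature names, values and application classes are invented.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fake_flows(n, mean_pkt, mean_iat, label):
    """Made-up per-flow statistics: packet size, inter-arrival time, duration."""
    return np.column_stack([
        rng.normal(mean_pkt, 40, n),        # mean packet size [bytes]
        rng.normal(mean_iat, 5, n),         # mean inter-arrival time [ms]
        rng.exponential(30, n),             # flow duration [s]
    ]), np.full(n, label)

X_skype, y_skype = fake_flows(300, 120, 20, "skype")
X_http, y_http = fake_flows(300, 900, 60, "web")
X = np.vstack([X_skype, X_http])
y = np.concatenate([y_skype, y_http])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Boosted decision trees as a rough stand-in for boosted C5.0 rulesets.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=25,
                         random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```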
{
"docid": "a717222db438adc4be0fd82f916bacdc",
"text": "This paper presents MalwareVis, a utility that provides security researchers a method to browse, filter, view and compare malware network traces as entities.\n Specifically, we propose a cell-like visualization model to view the network traces of a malware sample's execution. This model is a intuitive representation of the heterogeneous attributes (protocol, host ip, transmission size, packet number, duration) of a list of network streams associated with a malware instance. We encode these features into colors and basic geometric properties of common shapes. The list of streams is organized circularly in a clock-wise fashion to form an entity. Our design takes into account of the sparse and skew nature of these attributes' distributions and proposes mapping and layout strategies to allow a clear global view of a malware sample's behaviors.\n We demonstrate MalwareVis on a real-world corpus of malware samples and display their individual activity patterns. We show that it is a simple to use utility that provides intriguing visual representations that facilitate user interaction to perform security analysis.",
"title": ""
},
{
"docid": "8fd90f5904e6bd9738840bdaf8014372",
"text": "We present analytical formulations, based on a coulombian approach, of the magnetic field created by permanent-magnet rings. For axially magnetized magnets, we establish the expressions for the three components. We also give the analytical 3-D formulation of the created magnetic field for radially magnetized rings. We compare the results determined by a 2-D analytical approximation to those for the 3-D analytical formulation, in order to determine the range of validity of the 2-D approximation.",
"title": ""
},
{
"docid": "13a23fe61319bc82b8b3e88ea895218c",
"text": "A new generation of robots is being designed for human occupied workspaces where safety is of great concern. This research demonstrates the use of a capacitive skin sensor for collision detection. Tests demonstrate that the sensor reduces impact forces and can detect and characterize collision events, providing information that may be used in the future for force reduction behaviors. Various parameters that affect collision severity, including interface friction, interface stiffness, end tip velocity and joint stiffness irrespective of controller bandwidth are also explored using the sensor to provide information about the contact force at the site of impact. Joint stiffness is made independent of controller bandwidth limitations using passive torsional springs of various stiffnesses. Results indicate a positive correlation between peak impact force and joint stiffness, skin friction and interface stiffness, with implications for future skin and robot link designs and post-collision behaviors.",
"title": ""
},
{
"docid": "8741e414199ecfbbf4a4c16d8a303ab5",
"text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortious course, easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "b5c27fa3dbcd917f7cdc815965b22a67",
"text": "Our aim is to provide a pixel-wise instance-level labeling of a monocular image in the context of autonomous driving. We build on recent work [32] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [32] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [15]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [32].",
"title": ""
},
{
"docid": "84f6bc32035aab1e490d350c687df342",
"text": "Popularity bias is a phenomenon associated with collaborative filtering algorithms, in which popular items tend to be recommended over unpopular items. As the appropriate level of item popularity differs depending on individual users, a user-level modification approach can produce diverse recommendations while improving the recommendation accuracy. However, there are two issues with conventional user-level approaches. First, these approaches do not isolate users’ preferences from their tendencies toward item popularity clearly. Second, they do not consider temporal item popularity, although item popularity changes dynamically over time in reality. In this paper, we propose a novel approach to counteract the popularity bias, namely, matrix factorization based collaborative filtering incorporating individual users’ tendencies toward item popularity. Our model clearly isolates users’ preferences from their tendencies toward popularity. In addition, we consider the temporal item popularity and incorporate it into our model. Experimental results using a real-world dataset show that our model improve both accuracy and diversity compared with a baseline algorithm in both static and time-varying models. Moreover, our model outperforms conventional approaches in terms of accuracy with the same diversity level. Furthermore, we show that our proposed model recommends items by capturing users’ tendencies toward item popularity: it recommends popular items for the user who likes popular items, while recommending unpopular items for those who don’t like popular items.",
"title": ""
},
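The abstract above adds per-user popularity tendencies to matrix factorization. A minimal SGD sketch of that general idea (static popularity only, no temporal term) is shown below; the prediction rule, hyperparameters and toy ratings are assumptions rather than the authors' exact model.

```python
import numpy as np

def fit_pop_aware_mf(ratings, item_pop, n_users, n_items, k=8, lr=0.02,
                     reg=0.05, epochs=40, seed=0):
    """SGD matrix factorization with an extra per-user coefficient on item
    popularity: r_hat(u, i) = mu + p_u . q_i + w_u * pop_i."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    w = np.zeros(n_users)                       # each user's tendency toward popular items
    mu = np.mean([r for _, _, r in ratings])
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + P[u] @ Q[i] + w[u] * item_pop[i]
            e = r - pred
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
            w[u] += lr * (e * item_pop[i] - reg * w[u])
    return mu, P, Q, w

# Tiny synthetic example: user 0 rates popular items high, user 1 does the opposite.
item_pop = np.array([0.9, 0.8, 0.1, 0.2])       # normalized item popularity
ratings = [(0, 0, 5), (0, 1, 5), (0, 2, 2), (1, 0, 1), (1, 2, 5), (1, 3, 4)]
mu, P, Q, w = fit_pop_aware_mf(ratings, item_pop, n_users=2, n_items=4)
print("popularity tendencies:", np.round(w, 2))  # w[0] should come out larger than w[1]
```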
{
"docid": "f43ae2f0002343deeb0987d19e6a425e",
"text": "Recent state-of-the-art approaches automatically generate regular expressions from natural language specifications. Given that these approaches use only synthetic data in both training datasets and validation/test datasets, a natural question arises: are these approaches effective to address various real-world situations? To explore this question, in this paper, we conduct a characteristic study on comparing two synthetic datasets used by the recent research and a real-world dataset collected from the Internet, and conduct an experimental study on applying a state-of-the-art approach on the real-world dataset. Our study results suggest the existence of distinct characteristics between the synthetic datasets and the real-world dataset, and the state-of-the-art approach (based on a model trained from a synthetic dataset) achieves extremely low effectiveness when evaluated on real-world data, much lower than the effectiveness when evaluated on the synthetic dataset. We also provide initial analysis on some of those challenging cases and discuss future directions.",
"title": ""
},
{
"docid": "9c2c74da1e0f5ea601e50f257015c5b3",
"text": "We present a new lock-based algorithm for concurrent manipulation of a binary search tree in an asynchronous shared memory system that supports search, insert and delete operations. Some of the desirable characteristics of our algorithm are: (i) a search operation uses only read and write instructions, (ii) an insert operation does not acquire any locks, and (iii) a delete operation only needs to lock up to four edges in the absence of contention. Our algorithm is based on an internal representation of a search tree and it operates at edge-level (locks edges) rather than at node-level (locks nodes); this minimizes the contention window of a write operation and improves the system throughput. Our experiments indicate that our lock-based algorithm outperforms existing algorithms for a concurrent binary search tree for medium-sized and larger trees, achieving up to 59% higher throughput than the next best algorithm.",
"title": ""
},
{
"docid": "7dd3c935b6a5a38284b36ddc1dc1d368",
"text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "c2e92f8289ebf50ca363840133dc2a43",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.08.042 ⇑ Address: WOLNM & ESIME Zacatenco, Instituto Politécnico Nacional, U. Profesional Adolfo López Mateos, Edificio Z-4, 2do piso, cubiculo 6, Miguel Othón de Mendizábal S/N, La Escalera, Gustavo A. Madero, D.F., C.P. 07320, Mexico. Tel.: +52 55 5694 0916/+52 55 5454 2611 (cellular); fax: +52 55 5694 0916. E-mail address: apenaa@ipn.mx URL: http://www.wolnm.org/apa 1 AIWBES: adaptive and intelligent web-based educational systems; BKT: Bayesian knowledge tracing; CBES: computer-based educational systems; CBIS: computerbased information system,; DM: data mining; DP: dynamic programming; EDM: educational data mining; EM: expectation maximization; HMM: hidden Markov model; IBL: instances-based learning; IRT: item response theory; ITS: intelligent tutoring systems; KDD: knowledge discovery in databases; KT: knowledge tracing; LMS: learning management systems; SNA: social network analysis; SWOT: strengths, weakness, opportunities, and threats; WBC: web-based courses; WBES: web-based educational systems. Alejandro Peña-Ayala ⇑",
"title": ""
},
{
"docid": "333b15d94a2108929a8f6c18ef460ff4",
"text": "Inferring the latent emotive content of a narrative requires consideration of para-linguistic cues (e.g. pitch), linguistic content (e.g. vocabulary) and the physiological state of the narrator (e.g. heart-rate). In this study we utilized a combination of auditory, text, and physiological signals to predict the mood (happy or sad) of 31 narrations from subjects engaged in personal story-telling. We extracted 386 audio and 222 physiological features (using the Samsung Simband) from the data. A subset of 4 audio, 1 text, and 5 physiologic features were identified using Sequential Forward Selection (SFS) for inclusion in a Neural Network (NN). These features included subject movement, cardiovascular activity, energy in speech, probability of voicing, and linguistic sentiment (i.e. negative or positive). We explored the effects of introducing our selected features at various layers of the NN and found that the location of these features in the network topology had a significant impact on model performance. To ensure the real-time utility of the model, classification was performed over 5 second intervals. We evaluated our model’s performance using leave-one-subject-out crossvalidation and compared the performance to 20 baseline models and a NN with all features included in the input layer.",
"title": ""
},
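The abstract above selects a small multimodal feature subset with Sequential Forward Selection and feeds it to a neural network. A scikit-learn sketch of that pipeline on synthetic stand-in features is given below; the data, network size and cross-validation scheme are placeholders, not the study's setup.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 120
X = rng.standard_normal((n, 12))            # 12 candidate audio/text/physiology features
y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7]
     + 0.3 * rng.standard_normal(n) > 0).astype(int)

base = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
sfs = SequentialFeatureSelector(base, n_features_to_select=3, direction="forward", cv=5)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
print("selected features:", selected)        # informative columns 0, 3, 7 should dominate
score = cross_val_score(base, X[:, selected], y, cv=5).mean()
print("5-fold accuracy on the selected subset:", round(score, 3))
```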
{
"docid": "b4d85eae82415b0a8dcd5e9f6eadbc6f",
"text": "We compared the effects of children’s reading of an educational electronic storybook on their emergent literacy with those of being read the same story in its printed version by an adult. We investigated 128 5to 6-year-old kindergarteners; 64 children from each of two socio-economic status (SES) groups: low (LSES) and middle (MSES). In each group, children were randomly assigned to one of three subgroups. The two intervention groups included three book reading sessions each; children in one group individually read the electronic book; in the second group, the children were read the same printed book by an adult; children in the third group, which served as a control, received the regular kindergarten programme. Preand post-intervention emergent literacy measures included vocabulary, word recognition and phonological awareness. Compared with the control group, the children’s vocabulary scores in both intervention groups improved following reading activity. Children from both interventions groups and both SES groups showed a similarly good level of story comprehension. In both SES groups, compared with the control group, children’s phonological awareness and word recognition did not improve following both reading interventions. Implications for future research and for education are discussed.",
"title": ""
},
{
"docid": "6773b060fd16b6630f581eb65c5c6488",
"text": "Proximity detection is one of the most common location-based applications in daily life when users intent to find their friends who get into their proximity. Studies on protecting user privacy information during the detection process have been widely concerned. In this paper, we first analyze a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide a weak privacy preserving or result in a high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings. The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.",
"title": ""
},
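The abstract above builds proximity detection on the Paillier cryptosystem using location differences. The bare-bones exchange below uses the python-paillier (`phe`) package to compute encrypted coordinate differences; it omits the blinding a deployable protocol needs (here the querier learns the exact difference rather than only a yes/no answer), so treat it strictly as an illustration of the homomorphic step.

```python
from phe import paillier

# Alice wants to know whether Bob is within `threshold` grid cells of her,
# without sending Bob her plaintext location.
threshold = 3
alice_loc = (12, 40)
bob_loc = (14, 38)

# Alice: encrypt her coordinates and send the ciphertexts plus her public key.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
enc_ax, enc_ay = pub.encrypt(alice_loc[0]), pub.encrypt(alice_loc[1])

# Bob: homomorphically subtract his own coordinates (he never sees Alice's).
enc_dx = enc_ax - bob_loc[0]
enc_dy = enc_ay - bob_loc[1]

# Alice: decrypt the differences and test proximity. (A deployable protocol
# would have Bob blind these values so Alice only learns the final answer.)
dx, dy = priv.decrypt(enc_dx), priv.decrypt(enc_dy)
print("within range:", abs(dx) <= threshold and abs(dy) <= threshold)
```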
{
"docid": "6d3e19c44f7af5023ef991b722b078c5",
"text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. In this case, the cause of death was determined to be asphyxia from anoxia. Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.",
"title": ""
},
{
"docid": "70e2716835f789398e6d7a50aed9df46",
"text": "Human spatial behavior and experience cannot be investigated independently from the shape and configuration of environments. Therefore, comparative studies in architectural psychology and spatial cognition would clearly benefit from operationalizations of space that provide a common denominator for capturing its behavioral and psychologically relevant properties. This paper presents theoretical and methodological issues arising from the practical application of isovist-based graphs for the analysis of architectural spaces. Based on recent studies exploring the influence of spatial form and structure on behavior and experience in virtual environments, the following topics are discussed: (1) the derivation and empirical verification of meaningful descriptor variables on the basis of classic qualitative theories of environmental psychology relating behavior and experience to spatial properties; (2) methods to select reference points for the analysis of architectural spaces at a local level; furthermore, based on two experiments exploring the phenomenal conception of the spatial structure of architectural environments, formalized strategies for (3) the selection of reference points at a global level, and for (4), their integration into a sparse yet plausible comprehensive graph structure, are proposed. Taken together, a well formalized and psychologically oriented methodology for the efficient description of spatial properties of environments at the architectural scale level is outlined. This method appears useful for a wide range of applications, ranging from abstract architectural analysis over behavioral experiments to studies on mental representations in cognitive science. doi:10.1068/b33050 }Formerly also associated to Cognitive Neuroscience, Department of Zoology, University of Tu« bingen. Currently at the Centre for Cognitive Science, University of Freiburg, Friedrichstrasse 50, 79098 Freiburg, Germany. because, in reality, various potentially relevant factors coexist. In order to obtain better predictions under such complex conditions, either a comprehensive model or at least additional knowledge on the relative weights of individual factors and their potential interactions is required. As an intermediate step towards such more comprehensive approaches, existing theories have to be formulated qualitatively and translated to a common denominator. In this paper an integrative framework for describing the shape and structure of environments is outlined that allows for a quantitative formulation and test of theories on behavioral and emotional responses to environments. It is based on the two basic elements isovist and place graph. This combination appears particularly promising, since its sparseness allows an efficient representation of both geometrical and topological properties at a wide range of scales, and at the same time it seems capable and flexible enough to retain a substantial share of psychologically and behaviorally relevant detail features. Both the isovist and the place graph are established analysis techniques within their scientific communities of space syntax and spatial cognition respectively. Previous combinations of graphs and isovists (eg Batty, 2001; Benedikt, 1979; Turner et al, 2001) were based on purely formal criteria, whereas many placegraph applications made use of their inherent flexibility but suffered from a lack of formalization (cf Franz et al, 2005a). 
The methodology outlined in this paper seeks to combine both approaches by defining well-formalized rules for flexible graphs based on empirical findings on the human conception of the spatial structure. In sections 3 and 4, methodological issues of describing local properties on the basis of isovists are discussed. This will be done on the basis of recent empirical studies that tested the behavioral relevance of a selection of isovist measurands. The main issues are (a) the derivation of meaningful isovist measurands, based on classic qualitative theories from environmental psychology, and (b) strategies to select reference points for isovist analysis in environments consisting of few subspaces. Sections 5 and 6 then discuss issues arising when using an isovist-based description system for operationalizing larger environments consisting of multiple spaces: (c) on the basis of an empirical study in which humans identified subspaces by marking their centers, psychologically plausible selection criteria for sets of reference points are proposed and formalized; (d) a strategy to derive a topological graph on the basis of the previously identified elements is outlined. Taken together, a viable methodology is proposed which describes spatial properties of environments efficiently and comprehensively in a psychologically and behaviorally plausible manner.",
"title": ""
},
{
"docid": "0b407f1f4d771a34e6d0bc59bf2ef4c4",
"text": "Social advertisement is one of the fastest growing sectors in the digital advertisement landscape: ads in the form of promoted posts are shown in the feed of users of a social networking platform, along with normal social posts; if a user clicks on a promoted post, the host (social network owner) is paid a fixed amount from the advertiser. In this context, allocating ads to users is typically performed by maximizing click-through-rate, i.e., the likelihood that the user will click on the ad. However, this simple strategy fails to leverage the fact the ads can propagate virally through the network, from endorsing users to their followers. In this paper, we study the problem of allocating ads to users through the viral-marketing lens. Advertisers approach the host with a budget in return for the marketing campaign service provided by the host. We show that allocation that takes into account the propensity of ads for viral propagation can achieve significantly better performance. However, uncontrolled virality could be undesirable for the host as it creates room for exploitation by the advertisers: hoping to tap uncontrolled virality, an advertiser might declare a lower budget for its marketing campaign, aiming at the same large outcome with a smaller cost. This creates a challenging trade-off: on the one hand, the host aims at leveraging virality and the network effect to improve advertising efficacy, while on the other hand the host wants to avoid giving away free service due to uncontrolled virality. We formalize this as the problem of ad allocation with minimum regret, which we show is NP-hard and inapproximable w.r.t. any factor. However, we devise an algorithm that provides approximation guarantees w.r.t. the total budget of all advertisers. We develop a scalable version of our approximation algorithm, which we extensively test on four real-world data sets, confirming that our algorithm delivers high quality solutions, is scalable, and significantly outperforms several natural baselines.",
"title": ""
},
{
"docid": "4ade01af5fd850722fd690a5d8f938f4",
"text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.",
"title": ""
}
] |
scidocsrr
|
da71981258ef726d3b8973ac70e03233
|
Robust Optimization for Deep Regression
|
[
{
"docid": "ab430da4dbaae50c2700f3bb9b1dbde5",
"text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.",
"title": ""
},
{
"docid": "c718a2f9eb395e3b4a27ddf3208c4233",
"text": "Our objective is to efficiently and accurately estimate the upper body pose of humans in gesture videos. To this end, we build on the recent successful applications of deep convolutional neural networks (ConvNets). Our novelties are: (i) our method is the first to our knowledge to use ConvNets for estimating human pose in videos; (ii) a new network that exploits temporal information from multiple frames, leading to better performance; (iii) showing that pre-segmenting the foreground of the video improves performance; and (iv) demonstrating that even without foreground segmentations, the network learns to abstract away from the background and can estimate the pose even in the presence of a complex, varying background. We evaluate our method on the BBC TV Signing dataset and show that our pose predictions are significantly better, and an order of magnitude faster to compute, than the state of the art [3].",
"title": ""
}
] |
[
{
"docid": "8a59e2b140eaf91a4a5fd8c109682543",
"text": "A search-based procedural content generation (SBPCG) algorithm for strategy game maps is proposed. Two representations for strategy game maps are devised, along with a number of objectives relating to predicted player experience. A multiobjective evolutionary algorithm is used for searching the space of maps for candidates that satisfy pairs of these objectives. As the objectives are inherently partially conflicting, the algorithm generates Pareto fronts showing how these objectives can be balanced. Such fronts are argued to be a valuable tool for designers looking to balance various design needs. Choosing appropriate points (manually or automatically) on the Pareto fronts, maps can be found that exhibit good map design according to specified criteria, and could either be used directly in e.g. an RTS game or form the basis for further human design.",
"title": ""
},
{
"docid": "28a859bed62033aed005ee5895109953",
"text": "Eating out has recently become part of our lifestyle. However, when eating out in restaurants, many people find it difficult to make meal choices consistent with their health goals. Bad eating choices and habits are in part responsible for the alarming increase in the prevalence of chronic diseases such as obesity, diabetes, and high blood pressure, which burden the health care system. Therefore, there is a need for an intervention that educates the public on how to make healthy choices while eating away from home. In this paper, we propose a goal-based slow-casual game approach that addresses this need. This approach acknowledges different groups of users with varying health goals and adopts slow technology to promote learning and reflection. We model two recognized determinants of well-being into dietary interventions and provide feedback accordingly. To demonstrate the suitability of our approach for long-term sustained learning, reflection, and attitude and/or behavior change, we develop and evaluate LunchTime—a goal-based slow-casual game that educates players on how to make healthier meal choices. The result from the evaluation shows that LunchTime facilitates learning and reflection and promotes positive dietary attitude change.",
"title": ""
},
{
"docid": "602c176fc4150543f443f0891161b1bb",
"text": "In the wake of a polarizing election, the cyber world is laden with hate speech. Context accompanying a hate speech text is useful for identifying hate speech, which however has been largely overlooked in existing datasets and hate speech detection models. In this paper, we provide an annotated corpus of hate speech with context information well kept. Then we propose two types of hate speech detection models that incorporate context information, a logistic regression model with context features and a neural network model with learning components for context. Our evaluation shows that both models outperform a strong baseline by around 3% to 4% in F1 score and combining these two models further improve the performance by another 7% in F1 score.",
"title": ""
},
{
"docid": "84fa05e6953d4a16d892107e8c909935",
"text": "Within the last years automotive radar sensors became more and more important. Started with comfort systems like adaptive cruise control (ACC) and parking aid, the safety aspect increasingly came to the fore. In the near future not only upper class cars will be equipped with various radar applications like pre crash, collision warning and collision avoidance to increase traffic safety. Therefore many high performance radar sensors have to be integrated in the car. This paper describes the demands on future radar sensors. It is discussed how the step to higher operating frequencies could be beneficial particularly for urban situations with high traffic density. Furthermore the paper will provide and discuss design considerations for future mmW-Radar sensors in automotive safety applications.",
"title": ""
},
{
"docid": "f094754a454233cc8992f11e9dcb8bc9",
"text": "This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptualdriven methods to compete alongside algorithms that target PSNR maximization. Twenty-one participating teams introduced algorithms which well-improved upon the existing state-of-the-art methods in perceptual SR, as confirmed by a human opinion study. We also analyze popular image quality measures and draw conclusions regarding which of them correlates best with human opinion scores. We conclude with an analysis of the current trends in perceptual SR, as reflected from the leading submissions.",
"title": ""
},
{
"docid": "3bcf0e33007feb67b482247ef6702901",
"text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"title": ""
},
{
"docid": "455b4203d77a63a6de2083665b7250bb",
"text": "Gephi is a network visualization software used in various disciplines (social network analysis, biology, genomics...). One of its key features is the ability to display the spatialization process, aiming at transforming the network into a map, and ForceAtlas2 is its default layout algorithm. The latter is developed by the Gephi team as an all-around solution to Gephi users' typical networks (scale-free, 10 to 10,000 nodes). We present here for the first time its functioning and settings. ForceAtlas2 is a force-directed layout close to other algorithms used for network spatialization. We do not claim a theoretical advance but an attempt to integrate different techniques such as the Barnes Hut simulation, degree-dependent repulsive force, and local and global adaptive temperatures. It is designed for the Gephi user experience (it is a continuous algorithm), and we explain which constraints it implies. The algorithm benefits from much feedback and is developed in order to provide many possibilities through its settings. We lay out its complete functioning for the users who need a precise understanding of its behaviour, from the formulas to graphic illustration of the result. We propose a benchmark for our compromise between performance and quality. We also explain why we integrated its various features and discuss our design choices.",
"title": ""
},
{
"docid": "fbb71a8a7630350a7f33f8fb90b57965",
"text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.",
"title": ""
},
{
"docid": "bef0d332ecdae7b69537e87ac0e5c7bb",
"text": "Over the past decade, the field of finite-dimensional variational inequality and complementarity problems has seen a rapid development in its theory of existence, uniqueness and sensitivity of solution(s), in the theory of algorithms, and in the application of these techniques to transportation planning, regional science, socio-economic analysis, energy modeling, and game theory. This paper provides a state-of-the-art review of these developments as well as a summary of some open research topics in this growing field.",
"title": ""
},
{
"docid": "32c405ebed87b4e1ca47cd15b7b9b61b",
"text": "Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Achieving the potential of these cameras requires efficiently analyzing the live videos in realtime. We describe VideoStorm, a video analytics system that processes thousands of video analytics queries on live video streams over large clusters. Given the high costs of vision processing, resource management is crucial. We consider two key characteristics of video analytics: resource-quality tradeoff with multi-dimensional configurations, and variety in quality and lag goals. VideoStorm’s offline profiler generates query resourcequality profile, while its online scheduler allocates resources to queries to maximize performance on quality and lag, in contrast to the commonly used fair sharing of resources in clusters. Deployment on an Azure cluster of 101 machines shows improvement by as much as 80% in quality of real-world queries and 7× better lag, processing video from operational traffic cameras.",
"title": ""
},
{
"docid": "ba2f7eb97611cb3a75f236436b048820",
"text": "Learning interpretable disentangled representations is a crucial yet challenging task. In this paper, we propose a weakly semi-supervised method, termed as Dual Swap Disentangling (DSD), for disentangling using both labeled and unlabeled data. Unlike conventional weakly supervised methods that rely on full annotations on the group of samples, we require only limited annotations on paired samples that indicate their shared attribute like the color. Our model takes the form of a dual autoencoder structure. To achieve disentangling using the labeled pairs, we follow a “encoding-swap-decoding” process, where we first swap the parts of their encodings corresponding to the shared attribute, and then decode the obtained hybrid codes to reconstruct the original input pairs. For unlabeled pairs, we follow the “encoding-swap-decoding” process twice on designated encoding parts and enforce the final outputs to approximate the input pairs. By isolating parts of the encoding and swapping them back and forth, we impose the dimension-wise modularity and portability of the encodings of the unlabeled samples, which implicitly encourages disentangling under the guidance of labeled pairs. This dual swap mechanism, tailored for semi-supervised setting, turns out to be very effective. Experiments on image datasets from a wide domain show that our model yields state-of-the-art disentangling performances.",
"title": ""
},
{
"docid": "2d7c0c93199c05ee53cdd2d0beb444ce",
"text": "This paper presents a three-phase grid-connected photovoltaic generation system with unity power factor for any situation of solar radiation. The model of the PWM inverter and a control strategy using dq0 transformation are proposed the system operates as an active filter capable of compensate harmonic components and reactive power, generated by the other loads connected to the system. A input voltage clamping technique is proposed to control the power between the grid and photovoltaic system, where it is intended to achieve the maximum power point operation. Simulation results and analyses are presented to validate the proposed methodology for grid connected photovoltaic generation system.",
"title": ""
},
{
"docid": "87c6a5a8d00a284f313d923c27531f75",
"text": "Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.",
"title": ""
},
{
"docid": "b12f1b1ff7618c1f54462c18c768dae8",
"text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.",
"title": ""
},
{
"docid": "db383295c34b919b2e2e859cfdf82fc2",
"text": "Wafer level packages (WLPs) with various design configurations are rapidly gaining tremendous applications throughout semiconductor industry due to small-form factor, low-cost, and high performance. Because of the innovative production processes utilized in WLP manufacturing and the accompanying rise in the price of gold, the traditional wire bonding packages are no longer as attractive as they used to be. In addition, WLPs provide the smallest form factor to satisfy multifunctional device requirements along with improved signal integrity for today’s handheld electronics. Existing wire bonding devices can be easily converted to WLPs by adding a redistribution layer (RDL) during backend wafer level processing. Since the input/output (I/O) pads do not have to be routed to the perimeter of the die, the WLP die can be designed to have a much smaller footprint as compared to its wire bonding counterpart, which means more area-array dies can be packed onto a single wafer to reduce overall processing costs per die. Conventional (fan-in) WLPs are formed on the dies while they are still on the uncut wafer. The result is that the final packaged product is the same size as the die itself. Recently, fan-out WLPs have emerged. Fan-out WLP starts with the reconstitution or reconfiguration of individual dies to an artificial molded wafer. Fan-out WLPs eliminate the need of expensive substrate as in flip-chip packages, while expanding the WLP size with molding compound for higher I/O applications without compromising on the board level reliability. Essentially, WLP enables the next generation of portable electronics at a competitive price. Many future products using through-silicon-via (TSV) technology will be packaged as WLPs. There have been relatively few publications focused on the latest results of WLP development and research. Many design guidelines, such as material selection and geometry dimensions of under bump metallurgy (UBM), RDL, passivation and solder alloy, for optimum board level reliability performance of WLPs, are still based on technical know-how gained from flip-chip or wire bonding BGA reliability studies published in the past two decades. However, WLPs have their unique product requirements for design guidelines, process conditions, material selection, reliability tests, and failure analysis. In addition, WLP is also an enabling technology for 3D package and system-in-package (SIP), justifying significant research attention. The timing is therefore ripe for this edition to summarize the state-of-the-art research advances in wafer level packaging in various fields of interest. Integration of WLP in 3D packages with TSV or wireless proximity communication (PxC), as well as applications in Microelectromechanical Systems (MEMS) packaging and power packaging, will be highlighted in this issue. In addition, the stateof-the-art simulation is applied to design for enhanced package and board level reliability of WLPs, including thermal cycling test,",
"title": ""
},
{
"docid": "a1a04d251e19a43455787cefa02bae53",
"text": "This paper provides an overview of CMOS-based sensor technology with specific attention placed on devices made through micromachining of CMOS substrates and thin films. Microstructures may be formed using either pre-CMOS, intra-CMOS and post-CMOS fabrication approaches. To illustrate and motivate monolithic integration, a handful of microsystem examples, including inertial sensors, gravimetric chemical sensors, microphones, and a bone implantable sensor will be highlighted. Design constraints and challenges for CMOS-MEMS devices will be covered",
"title": ""
},
{
"docid": "177db8a6f89528c1e822f52395a34468",
"text": "Design of a low-energy power-ON reset (POR) circuit is proposed to reduce the energy consumed by the stable supply of the dual supply static random access memory (SRAM), as the other supply is ramping up. The proposed POR circuit, when embedded inside dual supply SRAM, removes its ramp-up constraints related to voltage sequencing and pin states. The circuit consumes negligible energy during ramp-up, does not consume dynamic power during operations, and includes hysteresis to improve noise immunity against voltage fluctuations on the power supply. The POR circuit, designed in the 40-nm CMOS technology within 10.6-μm2 area, enabled 27× reduction in the energy consumed by the SRAM array supply during periphery power-up in typical conditions.",
"title": ""
},
{
"docid": "c581f1797921247e9674c06b49c1b055",
"text": "Service organizations are increasingly utilizing advanced information and communication technologies, such as the Internet, in hopes of improving the efficiency, cost-effectiveness, and/or quality of their customer-facing operations. More of the contact a customer has with the firm is likely to be with the back-office and, therefore, mediated by technology. While previous operations management research has been important for its contributions to our understanding of customer contact in face-to-facesettings, considerably less work has been done to improve our understanding of customer contact in what we refer to as technology-mediated settings (e.g., via telephone, instant messaging (IM), or email). This paper builds upon the service operations management (SOM) literature on customer contact by theoretically defining and empirically developing new multi-item measurement scales specifically designed for assessing tech ology-mediated customer contact. Seminal works on customer contact theory and its empirical measurement are employed to provide a foundation for extending these concepts to technology-mediated contexts. We also draw upon other important frameworks, including the Service Profit Chain, the Theory of Planned Behavior, and the concept of media/information richness, in order to identify and define our constructs. We follow a rigorous empirical scale development process to create parsimonious sets of survey items that exhibit satisfactory levels of reliability and validity to be useful in advancing SOM empirical research in the emerging Internet-enabled back-office. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3addac5ab7cb72d47b6f11310d1071df",
"text": "Software vulnerabilities pose significant security risks to the host computing system. Faced with continuous disclosure of software vulnerabilities, system administrators must prioritize their efforts, triaging the most critical vulnerabilities to address first. Many vulnerability scoring systems have been proposed, but they all require expert knowledge to determine intricate vulnerability metrics. In this paper, we propose a deep learning approach to predict multi-class severity level of software vulnerability using only vulnerability description. Compared with intricate vulnerability metrics, vulnerability description is the \"surface level\" information about how a vulnerability works. To exploit vulnerability description for predicting vulnerability severity, discriminative features of vulnerability description have to be defined. This is a challenging task due to the diversity of software vulnerabilities and the richness of vulnerability descriptions. Instead of relying on manual feature engineering, our approach uses word embeddings and a one-layer shallow Convolutional Neural Network (CNN) to automatically capture discriminative word and sentence features of vulnerability descriptions for predicting vulnerability severity. We exploit large amounts of vulnerability data from the Common Vulnerabilities and Exposures (CVE) database to train and test our approach.",
"title": ""
},
{
"docid": "b4fb3d502f87c2114d6c5b0fc9b6f2aa",
"text": "A new power semiconductor device called the Insulated Gate Rectifier (IGR) is described in this paper. This device has the advantages of operating at high current densities while requiring low gate drive power. The devices exhibit relatively slow switching speeds due to bipolar operation. The results of two dimensional computer modelling of the device structure are compared with measurements taken on devices fabricated with 600 volt forward and reverse blocking capability.",
"title": ""
}
] |
scidocsrr
|
c221b48264b1fea8d920dfbf75f89510
|
Mining actionlet ensemble for action recognition with depth cameras
|
[
{
"docid": "8b51b2ee7385649bc48ba4febe0ec4c3",
"text": "This paper presents a HMM-based methodology for action recogni-tion using star skeleton as a representative descriptor of human posture. Star skeleton is a fast skeletonization technique by connecting from centroid of target object to contour extremes. To use star skeleton as feature for action recognition, we clearly define the fea-ture as a five-dimensional vector in star fashion because the head and four limbs are usually local extremes of human shape. In our proposed method, an action is composed of a series of star skeletons over time. Therefore, time-sequential images expressing human action are transformed into a feature vector sequence. Then the fea-ture vector sequence must be transformed into symbol sequence so that HMM can model the action. We design a posture codebook, which contains representative star skeletons of each action type and define a star distance to measure the similarity between feature vec-tors. Each feature vector of the sequence is matched against the codebook and is assigned to the symbol that is most similar. Conse-quently, the time-sequential images are converted to a symbol posture sequence. We use HMMs to model each action types to be recognized. In the training phase, the model parameters of the HMM of each category are optimized so as to best describe the training symbol sequences. For human action recognition, the model which best matches the observed symbol sequence is selected as the recog-nized category. We implement a system to automatically recognize ten different types of actions, and the system has been tested on real human action videos in two cases. One case is the classification of 100 video clips, each containing a single action type. A 98% recog-nition rate is obtained. The other case is a more realistic situation in which human takes a series of actions combined. An action-series recognition is achieved by referring a period of posture history using a sliding window scheme. The experimental results show promising performance.",
"title": ""
}
] |
[
{
"docid": "b1ba9d65373fc7bd57259fb1fc252298",
"text": "BACKGROUND\nFocus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advise on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out.\n\n\nMETHODS\nWe searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed.\n\n\nRESULTS\nWe identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers.\n\n\nCONCLUSIONS\nBased on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these studies may also reflect the lack of clear, evidence-based guidance about deciding on sample size. More empirical research is needed to develop focus group methodology.",
"title": ""
},
{
"docid": "68093a9767aea52026a652813c3aa5fd",
"text": "Conventional capacitively coupled neural recording amplifiers often present a large input load capacitance to the neural signal source and hence take up large circuit area. They suffer due to the unavoidable trade-off between the input capacitance and chip area versus the amplifier gain. In this work, this trade-off is relaxed by replacing the single feedback capacitor with a clamped T-capacitor network. With this simple modification, the proposed amplifier can achieve the same mid-band gain with less input capacitance, resulting in a higher input impedance and a smaller silicon area. Prototype neural recording amplifiers based on this proposal were fabricated in 0.35 μm CMOS, and their performance is reported. The amplifiers occupy smaller area and have lower input loading capacitance compared to conventional neural amplifiers. One of the proposed amplifiers occupies merely 0.056 mm2. It achieves 38.1-dB mid-band gain with 1.6 pF input capacitance, and hence has an effective feedback capacitance of 20 fF. Consuming 6 μW, it has an input referred noise of 13.3 μVrms over 8.5 kHz bandwidth and NEF of 7.87. In-vivo recordings from animal experiments are also demonstrated.",
"title": ""
},
{
"docid": "d98809ba1dd612fb1d73e72cc8b40096",
"text": "Recent advances in functional magnetic resonance imaging (fMRI) data acquisition and processing techniques have made real-time fMRI (rtfMRI) of localized brain areas feasible, reliable and less susceptible to artefacts. Previous studies have shown that healthy subjects learn to control local brain activity with operant training by using rtfMRI-based neurofeedback. In the present study, we investigated whether healthy subjects could voluntarily gain control over right anterior insular activity. Subjects were provided with continuously updated information of the target ROI's level of activation by visual feedback. All participants were able to successfully regulate BOLD-magnitude in the right anterior insular cortex within three sessions of 4 min each. Training resulted in a significantly increased activation cluster in the anterior portion of the right insula across sessions. An increased activity was also found in the left anterior insula but the percent signal change was lower than in the target ROI. Two different control conditions intended to assess the effects of non-specific feedback and mental imagery demonstrated that the training effect was not due to unspecific activations or non feedback-related cognitive strategies. Both control groups showed no enhanced activation across the sessions, which confirmed our main hypothesis that rtfMRI feedback is area-specific. The increased activity in the right anterior insula during training demonstrates that the effects observed are anatomically specific and self-regulation of right anterior insula only is achievable. This is the first group study investigating the volitional control of emotionally relevant brain region by using rtfMRI training and confirms that self-regulation of local brain activity with rtfMRI is possible.",
"title": ""
},
{
"docid": "6d8bd77d78263f6a98b23d1759417d94",
"text": "Implementations of word sense disambiguation (WSD) algorithms tend to be tied to a particular test corpus format and sense inventory. This makes it difficult to test their performance on new data sets, or to compare them against past algorithms implemented for different data sets. In this paper we present DKPro WSD, a freely licensed, general-purpose framework for WSD which is both modular and extensible. DKPro WSD abstracts the WSD process in such a way that test corpora, sense inventories, and algorithms can be freely swapped. Its UIMA-based architecture makes it easy to add support for new resources and algorithms. Related tasks such as word sense induction and entity linking are also supported.",
"title": ""
},
{
"docid": "03c14c8dff455afdaab6fd3ddc4dcc35",
"text": "BACKGROUND\nAdolescents and college students are at high risk for initiating alcohol use and high-risk (or binge) drinking. There is a growing body of literature on neurotoxic and harmful cognitive effects of drinking by young people. On average, youths take their first drink at age 12 years.\n\n\nMETHODS\nMEDLINE search on neurologic and cognitive effects of underage drinking.\n\n\nRESULTS\nProblematic alcohol consumption is not a benign condition that resolves with age. Individuals who first use alcohol before age 14 years are at increased risk of developing alcohol use disorders. Underage drinkers are susceptible to immediate consequences of alcohol use, including blackouts, hangovers, and alcohol poisoning and are at elevated risk of neurodegeneration (particularly in regions of the brain responsible for learning and memory), impairments in functional brain activity, and the appearance of neurocognitive deficits. Heavy episodic or binge drinking impairs study habits and erodes the development of transitional skills to adulthood.\n\n\nCONCLUSIONS\nUnderage alcohol use is associated with brain damage and neurocognitive deficits, with implications for learning and intellectual development. Impaired intellectual development may continue to affect individuals into adulthood. It is imperative for policymakers and organized medicine to address the problem of underage drinking.",
"title": ""
},
{
"docid": "192b4a503a903747caffe5ea03c31c16",
"text": "We analyze and reframe AI progress. In addition to the prevailing metrics of performance, we highlight the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress.",
"title": ""
},
{
"docid": "93325e6f1c13889fb2573f4631d021a5",
"text": "The difference between a computer game and a simulator can be a small one both require the same capabilities from the computer: realistic graphics, behavior consistent with the laws of physics, a variety of scenarios where difficulties can emerge, and some assessment technique to inform users of performance. Computer games are a multi-billion dollar industry in the United States, and as the production costs and complexity of games have increased, so has the effort to make their creation easier. Commercial software products have been developed to greatly simpl ify the game-making process, allowing developers to focus on content rather than on programming. This paper investigates Unity3D game creation software for making threedimensional engine-room simulators. Unity3D is arguably the best software product for game creation, and has been used for numerous popular and successful commercial games. Maritime universities could greatly benefit from making custom simulators to fit specific applications and requirements, as well as from reducing the cost of purchasing simulators. We use Unity3D to make a three-dimensional steam turbine simulator that achieves a high degree of realism. The user can walk around the turbine, open and close valves, activate pumps, and run the turbine. Turbine operating parameters such as RPM, condenser vacuum, lube oil temperature. and governor status are monitored. In addition, the program keeps a log of any errors made by the operator. We find that with the use of Unity3D, students and faculty are able to make custom three-dimensional ship and engine room simulators that can be used as training and evaluation tools.",
"title": ""
},
{
"docid": "ce786570fc3565145d980a4c53c3d292",
"text": "Existing digital hearing aids, to our knowledge, all exclude ANSI S1.11-compliant filter banks because of the high computational complexity. Most ANSI S1.11 designs are IIR- based and only applicable in applications where linear phase is not important. This paper presents an FIR-based ANSI S1.11 filter bank for digital hearing aids, which adopts a multi-rate architecture to reduce the data rates on the bandwidth-limited bands. A systematic way is also proposed to minimize the FIR orders thereof. In an 18-band digital hearing aid with 24 kHz input sampling rate, the proposed design with linear phase has comparable computational complexity with IIR filter banks. Moreover, our design requires only 4% multiplications and additions of a straightforward FIR implementation.",
"title": ""
},
{
"docid": "3790ec7f10c014fa56d3890060ed8bce",
"text": "Since LCL filter has smaller inductance value comparing to L type filter with the same performance in harmonic suppression. it is gradually used in high-power and low-frequency current-source-controlled grid-connected converters. However design of LCL filter's parameter not only relates switch frequency ripple attenuation, but also impacts on performance of grid-connected current controller. This paper firstly introduced a harmonic model of LCL filter in grid-connected operation, then researched the variable relationship among LCL filter's parameter and resonance frequency and high-frequency ripple attenuation. Based on above analysis a reasonable design method was brought out in order to achieve optimal effect under the precondition of saving inductance magnetic core of LCL filter, at the same time guaranteeing the resonance frequency of LCL filter was not too small lest restrict current controller resign. Finally this design method was verified by the experimental results.",
"title": ""
},
{
"docid": "522e384f4533ca656210561be9afbdab",
"text": "Every software program that interacts with a user requires a user interface. Model-View-Controller (MVC) is a common design pattern to integrate a user interface with the application domain logic. MVC separates the representation of the application domain (Model) from the display of the application's state (View) and user interaction control (Controller). However, studying the literature reveals that a variety of other related patterns exists, which we denote with Model-View- (MV) design patterns. This paper discusses existing MV patterns classified in three main families: Model-View-Controller (MVC), Model-View-View Model (MVVM), and Model-View-Presenter (MVP). We take a practitioners' point of view and emphasize the essentials of each family as well as the differences. The study shows that the selection of patterns should take into account the use cases and quality requirements at hand, and chosen technology. We illustrate the selection of a pattern with an example of our practice. The study results aim to bring more clarity in the variety of MV design patterns and help practitioners to make better grounded decisions when selecting patterns.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "0a0cc3c3d3cd7e7c3e8b409554daa5a3",
"text": "Purpose: We investigate the extent of voluntary disclosures in UK higher education institutions’ (HEIs) annual reports and examine whether internal governance structures influence disclosure in the period following major reform and funding constraints. Design/methodology/approach: We adopt a modified version of Coy and Dixon’s (2004) public accountability index, referred to in this paper as a public accountability and transparency index (PATI), to measure the extent of voluntary disclosures in 130 UK HEIs’ annual reports. Informed by a multitheoretical framework drawn from public accountability, legitimacy, resource dependence and stakeholder perspectives, we propose that the characteristics of governing and executive structures in UK universities influence the extent of their voluntary disclosures. Findings: We find a large degree of variability in the level of voluntary disclosures by universities and an overall relatively low level of PATI (44%), particularly with regards to the disclosure of teaching/research outcomes. We also find that audit committee quality, governing board diversity, governor independence, and the presence of a governance committee are associated with the level of disclosure. Finally, we find that the interaction between executive team characteristics and governance variables enhances the level of voluntary disclosures, thereby providing support for the continued relevance of a ‘shared’ leadership in the HEIs’ sector towards enhancing accountability and transparency in HEIs. Research limitations/implications: In spite of significant funding cuts, regulatory reforms and competitive challenges, the level of voluntary disclosure by UK HEIs remains low. Whilst the role of selected governance mechanisms and ‘shared leadership’ in improving disclosure, is asserted, the varying level and selective basis of the disclosures across the surveyed HEIs suggest that the public accountability motive is weaker relative to the other motives underpinned by stakeholder, legitimacy and resource dependence perspectives. Originality/value: This is the first study which explores the association between HEI governance structures, managerial characteristics and the level of disclosure in UK HEIs.",
"title": ""
},
{
"docid": "c34b474b06d21d1bebdcb8a37b8470c5",
"text": "Using machine learning to analyze data often results in developer exhaust – code, logs, or metadata that do not de ne the learning algorithm but are byproducts of the data analytics pipeline. We study how the rich information present in developer exhaust can be used to approximately solve otherwise complex tasks. Speci cally, we focus on using log data associated with training deep learning models to perform model search by predicting performance metrics for untrainedmodels. Instead of designing a di erent model for each performance metric, we present two preliminary methods that rely only on information present in logs to predict these characteristics for di erent architectures. We introduce (i) a nearest neighbor approachwith a hand-crafted edit distancemetric to comparemodel architectures and (ii) a more generalizable, end-to-end approach that trains an LSTM using model architectures and associated logs to predict performancemetrics of interest.We performmodel search optimizing for best validation accuracy, degree of over tting, and best validation accuracy given a constraint on training time. Our approaches can predict validation accuracy within 1.37% error on average, while the baseline achieves 4.13% by using the performance of a trainedmodel with the closest number of layers.When choosing the best performing model given constraints on training time, our approaches select the top-3 models that overlap with the true top3 models 82% of the time, while the baseline only achieves this 54% of the time. Our preliminary experiments hold promise for how developer exhaust can help learnmodels that can approximate various complex tasks e ciently. ACM Reference Format: Jian Zhang, Max Lam, Stephanie Wang, Paroma Varma, Luigi Nardi, Kunle Olukotun, Christopher Ré. 2018. Exploring the Utility of Developer Exhaust. In DEEM’18: International Workshop on Data Management for End-to-End Machine Learning, June 15, 2018, Houston, TX, USA.",
"title": ""
},
{
"docid": "544cfa381dad24a53a31e368e10d8f75",
"text": "Several previous works have shown that TCP exhibits poor performance in mobile ad hoc networks (MANETs). The ultimate reason for this is that MANETs behave in a significantly different way from traditional wired networks, like the Internet, for which TCP was originally designed. In this paper we propose a novel transport protocol - named TPA - specifically tailored to the characteristics of the MANET environment. It is based on a completely new congestion control mechanism, and designed in such a way to minimize the number of useless transmissions and, hence, power consumption. Furthermore, it is able to manage efficiently route changes and route failures. We evaluated the TPA protocol in a static scenario where TCP exhibits good performance. Simulation results show that, even in such a scenario, TPA significantly outperforms TCP.",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "5fc3da9b59e9a2a7c26fa93445c68933",
"text": "A country's growth is strongly measured by quality of its education system. Education sector, across the globe has witnessed sea change in its functioning. Today it is recognized as an industry and like any other industry it is facing challenges, the major challenges of higher education being decrease in students' success rate and their leaving a course without completion. An early prediction of students' failure may help the management provide timely counseling as well coaching to increase success rate and student retention. We use different classification techniques to build performance prediction model based on students' social integration, academic integration, and various emotional skills which have not been considered so far. Two algorithms J48 (Implementation of C4.5) and Random Tree have been applied to the records of MCA students of colleges affiliated to Guru Gobind Singh Indraprastha University to predict third semester performance. Random Tree is found to be more accurate in predicting performance than J48 algorithm.",
"title": ""
},
{
"docid": "a936f3ea3a168c959c775dbb50a5faf2",
"text": "From the Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. Address correspondence to Dr. Schmahmann, Department of Neurology, VBK 915, Massachusetts General Hospital, Fruit St., Boston, MA 02114; jschmahmann@partners.org (E-mail). Copyright 2004 American Psychiatric Publishing, Inc. Disorders of the Cerebellum: Ataxia, Dysmetria of Thought, and the Cerebellar Cognitive Affective Syndrome",
"title": ""
},
{
"docid": "9eabe9a867edbceee72bd20d483ad886",
"text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"title": ""
},
{
"docid": "06731beb8a4563ed89338b4cba88d1df",
"text": "It has been almost five years since the ISO adopted a standard for measurement of image resolution of digital still cameras using slanted-edge gradient analysis. The method has also been applied to the spatial frequency response and MTF of film and print scanners, and CRT displays. Each of these applications presents challenges to the use of the method. Previously, we have described causes of both bias and variation error in terms of the various signal processing steps involved. This analysis, when combined with observations from practical systems testing, has suggested improvements and interpretation of results. Specifically, refinements in data screening for signal encoding problems, edge feature location and slope estimation, and noise resilience will be addressed.",
"title": ""
}
] |
scidocsrr
|
cb634082d59b19f5a36b852b863a03c2
|
Support Vector Machines for Multiple-Instance Learning
|
[
{
"docid": "04435e017e720c0ed6e5c0cd29f1b4fc",
"text": "Blobworld is a system for image retrieval based on finding coherent image regions which roughly correspond to objects. Each image is automatically segmented into regions (“blobs”) with associated color and texture descriptors. Querying is based on the attributes of one or two regions of interest, rather than a description of the entire image. In order to make large-scale retrieval feasible, we index the blob descriptions using a tree. Because indexing in the high-dimensional feature space is computationally prohibitive, we use a lower-rank approximation to the high-dimensional distance. Experiments show encouraging results for both querying and indexing.",
"title": ""
}
] |
[
{
"docid": "6fee1cce864d858af6e28959961f5c24",
"text": "Much of the organic light emitting diode (OLED) characterization published to date addresses the high current regime encountered in the operation of passively addressed displays. Higher efficiency and brightness can be obtained by driving with an active matrix, but the lower instantaneous pixel currents place the OLEDs in a completely different operating mode. Results at these low current levels are presented and their impact on active matrix display design is discussed.",
"title": ""
},
{
"docid": "34f7878d3c4775899bbc189ac192004a",
"text": "The Dutch-Belgian Randomized Lung Cancer Screening Trial (Dutch acronym: NELSON study) was designed to investigate whether screening for lung cancer by low-dose multidetector computed tomography (CT) in high-risk subjects will lead to a decrease in 10-year lung cancer mortality of at least 25% compared with a control group without screening. Since the start of the NELSON study in 2003, 7557 participants underwent CT screening, with scan rounds in years 1, 2, 4 and 6. In the current review, the design of the NELSON study including participant selection and the lung nodule management protocol, as well as results on validation of CT screening and first results on lung cancer screening are described.",
"title": ""
},
{
"docid": "c4dbf075f91d1a23dda421261911a536",
"text": "In cultures of the Litopenaeus vannamei with biofloc, the concentrations of nitrate rise during the culture period, which may cause a reduction in growth and mortality of the shrimps. Therefore, the aim of this study was to determine the effect of the concentration of nitrate on the growth and survival of shrimp in systems using bioflocs. The experiment consisted of four treatments with three replicates each: The concentrations of nitrate that were tested were 75 (control), 150, 300, and 600 mg NO3 −-N/L. To achieve levels above 75 mg NO3 −-N/L, different dosages of sodium nitrate (PA) were added. For this purpose, twelve experimental units with a useful volume of 45 L were stocked with 15 juvenile L. vannamei (1.30 ± 0.31 g), corresponding to a stocking density of 333 shrimps/m3, that were reared for an experimental period of 42 days. Regarding the water quality parameters measured throughout the study, no significant differences were detected (p > 0.05). Concerning zootechnical performance, a significant difference (p < 0.05) was verified with the 75 (control) and 150 treatments presenting the best performance indexes, while the 300 and 600 treatments led to significantly poorer results (p < 0.05). The histopathological damage was observed in the gills and hepatopancreas of the shrimps exposed to concentrations ≥300 mg NO3 −-N/L for 42 days, and poorer zootechnical performance and lower survival were observed in the shrimps reared at concentrations ≥300 mg NO3 −-N/L under a salinity of 23. The results obtained in this study show that concentrations of nitrate up to 177 mg/L are acceptable for the rearing of L. vannamei in systems with bioflocs, without renewal of water, at a salinity of 23.",
"title": ""
},
{
"docid": "7d0bbf3a83881a97b0217b427b596b76",
"text": "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve a light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. Through evaluation of the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network–based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.",
"title": ""
},
{
"docid": "e2d63fece5536aa4668cd5027a2f42b9",
"text": "To ensure integrity, trust, immutability and authenticity of software and information (cyber data, user data and attack event data) in a collaborative environment, research is needed for cross-domain data communication, global software collaboration, sharing, access auditing and accountability. Blockchain technology can significantly automate the software export auditing and tracking processes. It allows to track and control what data or software components are shared between entities across multiple security domains. Our blockchain-based solution relies on role-based and attribute-based access control and prevents unauthorized data accesses. It guarantees integrity of provenance data on who updated what software module and when. Furthermore, our solution detects data leakages, made behind the scene by authorized blockchain network participants, to unauthorized entities. Our approach is used for data forensics/provenance, when the identity of those entities who have accessed/ updated/ transferred the sensitive cyber data or sensitive software is determined. All the transactions in the global collaborative software development environment are recorded in the blockchain public ledger and can be verified any time in the future. Transactions can not be repudiated by invokers. We also propose modified transaction validation procedure to improve performance and to protect permissioned IBM Hyperledger-based blockchains from DoS attacks, caused by bursts of invalid transactions.",
"title": ""
},
{
"docid": "511991822f427c3f62a4c091594e89e3",
"text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.",
"title": ""
},
{
"docid": "31c62f403e6d7f06ff2ab028894346ff",
"text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.",
"title": ""
},
{
"docid": "67c444b9538ccfe7a2decdd11523dcd5",
"text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.",
"title": ""
},
{
"docid": "57f5b00d796489b7f5caee701ce3116b",
"text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.",
"title": ""
},
{
"docid": "8bb5acdafefc35f6c1adf00cfa47ac2c",
"text": "A general method is introduced for separating points in multidimensional spaces through the use of stochastic processes. This technique is called stochastic discrimination.",
"title": ""
},
{
"docid": "51bc9449c1dd9518513945a4a2669806",
"text": "We present a robust model to locate facial landmarks under different views and possibly severe occlusions. To build reliable relationships between face appearance and shape with large view variations, we propose to formulate face alignment as an l1-induced Stagewise Relational Dictionary (SRD) learning problem. During each training stage, the SRD model learns a relational dictionary to capture consistent relationships between face appearance and shape, which are respectively modeled by the pose-indexed image features and the shape displacements for current estimated landmarks. During testing, the SRD model automatically selects a sparse set of the most related shape displacements for the testing face and uses them to refine its shape iteratively. To locate facial landmarks under occlusions, we further propose to learn an occlusion dictionary to model different kinds of partial face occlusions. By deploying the occlusion dictionary into the SRD model, the alignment performance for occluded faces can be further improved. Our algorithm is simple, effective, and easy to implement. Extensive experiments on two benchmark datasets and two newly built datasets have demonstrated its superior performances over the state-of-the-art methods, especially for faces with large view variations and/or occlusions.",
"title": ""
},
{
"docid": "52606d9059e08bda1bd837c8e5b8296b",
"text": "The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection LBSNs and mobile devices. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them to POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model (Geo-PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows to capture the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. Moreover, based our Geo-PFM framework, we further develop a Poisson Geo-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity), show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin.",
"title": ""
},
{
"docid": "35e671088cb28f44d729fd21f0ccd7db",
"text": "Sound event detection (SED) in environmental recordings is a key topic of research in machine listening, with applications in noise monitoring for smart cities, self-driving cars, surveillance, bioa-coustic monitoring, and indexing of large multimedia collections. Developing new solutions for SED often relies on the availability of strongly labeled audio recordings, where the annotation includes the onset, offset and source of every event. Generating such precise annotations manually is very time consuming, and as a result existing datasets for SED with strong labels are scarce and limited in size. To address this issue, we present Scaper, an open-source library for soundscape synthesis and augmentation. Given a collection of iso-lated sound events, Scaper acts as a high-level sequencer that can generate multiple soundscapes from a single, probabilistically defined, “specification”. To increase the variability of the output, Scaper supports the application of audio transformations such as pitch shifting and time stretching individually to every event. To illustrate the potential of the library, we generate a dataset of 10,000 sound-scapes and use it to compare the performance of two state-of-the-art algorithms, including a breakdown by soundscape characteristics. We also describe how Scaper was used to generate audio stimuli for an audio labeling crowdsourcing experiment, and conclude with a discussion of Scaper's limitations and potential applications.",
"title": ""
},
{
"docid": "094bb78ae482f2ad4877e53a446236f0",
"text": "While the amount of available information on the Web is increasing rapidly, the problem of managing it becomes more difficult. We present two applications, Thinkbase and Thinkpedia, which aim to make Web content more accessible and usable by utilizing visualizations of the semantic graph as a means to navigate and explore large knowledge repositories. Both of our applications implement a similar concept: They extract semantically enriched contents from a large knowledge spaces (Freebase and Wikipedia respectively), create an interactive graph-based representation out of it, and combine them into one interface together with the original text based content. We describe the design and implementation of our applications, and provide a discussion based on an informal evaluation. Author",
"title": ""
},
{
"docid": "044072a5478ac14beb201d38fab56ed4",
"text": "Biallelic HMX1 mutations cause a very rare autosomal recessive genetic disorder termed as oculoauricular syndrome (OAS) because it is characterized only by the combination of eye and ear anomalies. We identified a new family bringing to three the total families reported with this disorder. Our proband presented with anteriorly protruded ears and malformed ear pinnae in association with microphthalmia, congenital cataract, microcornea, and iris and optic disc colobomata. Additionally, he had high and broad forehead with asymmetry giving a recognizable facial gestalt. Further, short left mandibular ramus and bifid cingulum in the boy and short right mandibular ramus in his father were observed. Mutation analysis revealed a novel homozygous nonsense mutation c.487G>T in the second exon of the HMX1 that predicted to introduce a premature stop codon at position 163 (p.E163*). Parents showed the heterozygous state of the detected mutation. Investigations in a process as complex as craniofacial development suggest that there are still additional, as yet unidentified, genes that play in orchestrate to determine the final phenotype.",
"title": ""
},
{
"docid": "70a970138428aeb06c139abb893a56a9",
"text": "Two sequentially rotated, four stage, wideband circularly polarized high gain microstrip patch array antennas at Ku-band are investigated and compared by incorporating both unequal and equal power division based feeding networks. Four stages of sequential rotation is used to create 16×16 patch array which provides wider common bandwidth between the impedance matching (S11 < −10dB), 3dB axial ratio and 3dB gain of 12.3% for the equal power divider based feed array and 13.2% for the unequal power divider based feed array in addition to high polarization purity. The high peak gain of 28.5dBic is obtained for the unequal power division feed based array antennas compared to 26.8dBic peak gain in the case of the equal power division based feed array antennas. The additional comparison between two feed networks based arrays reveals that the unequal power divider based array antennas provide better array characteristics than the equal power divider based feed array antennas.",
"title": ""
},
{
"docid": "3692954147d1a60fb683001bd379047f",
"text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.",
"title": ""
},
{
"docid": "72a283eda92eb25404536308d8909999",
"text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.",
"title": ""
},
{
"docid": "50f7b9b21f6006b9e0976b8bf56f0fc3",
"text": "Based on the characteristics of wheeled, tracked and legged movements, a variable parallelogram tracked mobile robot(VPTMR) is proposed and developed to enhance its adaptability and stability in the complex environment. This VPTMR robot consists of two variable parallelogram structures, which are composed of one main tracked arm, two lower tracked arms and a chasis. The variable parallelogram structure is actuated by a DC motor. And another DC motor actuates the track rotation, which enables VPTMR robot to move in wheeled, tracked and legged mode that makes the robot to adapt to all rugged environments. The prototype(VPTMR) is developed to verify its performance on environmental adaptability, obstacle crossing ability and stability.",
"title": ""
},
{
"docid": "df27cb7c7ab82ef44aebfeb45d6c3cf1",
"text": "Nowadays, data is created by humans as well as automatically collected by physical things, which embed electronics, software, sensors and network connectivity. Together, these entities constitute the Internet of Things (IoT). The automated analysis of its data can provide insights into previously unknown relationships between things, their environment and their users, facilitating an optimization of their behavior. Especially the real-time analysis of data, embedded into physical systems, can enable new forms of autonomous control. These in turn may lead to more sustainable applications, reducing waste and saving resources. IoT’s distributed and dynamic nature, resource constraints of sensors and embedded devices as well as the amounts of generated data are challenging even the most advanced automated data analysis methods known today. In particular, the IoT requires a new generation of distributed analysis methods. Many existing surveys have strongly focused on the centralization of data in the cloud and big data analysis, which follows the paradigm of parallel high-performance computing. However, bandwidth and energy can be too limited for the transmission of raw data, or it is prohibited due to privacy constraints. Such communication-constrained scenarios require decentralized analysis algorithms which at least partly work directly on the generating devices. After listing data-driven IoT applications, in contrast to existing surveys, we highlight the differences between cloudbased and decentralized analysis from an algorithmic perspective. We present the opportunities and challenges of research on communication-efficient decentralized analysis algorithms. Here, the focus is on the difficult scenario of vertically partitioned data, which covers common IoT use cases. The comprehensive bibliography aims at providing readers with a good starting point for their own work.",
"title": ""
}
] |
scidocsrr
|
60d3ac8b8e95ad8021e407f5dd1c5e63
|
Two years of short URLs internet measurement: security threats and countermeasures
|
[
{
"docid": "2af711baba40a79b259c8d9c1f14518c",
"text": "Twitter can suffer from malicious tweets containing suspicious URLs for spam, phishing, and malware distribution. Previous Twitter spam detection schemes have used account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. Malicious users, however, can easily fabricate account features. Moreover, extracting relation features from the Twitter graph is time and resource consuming. Previous suspicious URL detection schemes have classified URLs using several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques exist, such as time-based evasion and crawler evasion. In this paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Instead of focusing on the landing pages of individual URLs in each tweet, we consider correlated redirect chains of URLs in a number of tweets. Because attackers have limited resources and thus have to reuse them, a portion of their redirect chains will be shared. We focus on these shared resources to detect suspicious URLs. We have collected a large number of tweets from the Twitter public timeline and trained a statistical classifier with features derived from correlated URLs and tweet context information. Our classifier has high accuracy and low false-positive and falsenegative rates. We also present WARNINGBIRD as a realtime system for classifying suspicious URLs in the Twitter stream. ∗This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1131-0009) and World Class University program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea(R31-10100).",
"title": ""
},
{
"docid": "d558f980b85bf970a7b57c00df361591",
"text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.",
"title": ""
},
{
"docid": "6cacb8cdc5a1cc17c701d4ffd71bdab1",
"text": "Phishing costs Internet users billions of dollars a year. Using various data sets collected in real-time, this paper analyzes various aspects of phisher modi operandi. We examine the anatomy of phishing URLs and domains, registration of phishing domains and time to activation, and the machines used to host the phishing sites. Our findings can be used as heuristics in filtering phishing-related emails and in identifying suspicious domain registrations.",
"title": ""
}
] |
[
{
"docid": "a2d06f6e3fbb7260346688acd02772c3",
"text": "Lane change is a crucial vehicle maneuver which needs coordination with surrounding vehicles. Automated lane changing functions built on rule-based models may perform well under pre-defined operating conditions, but they may be prone to failure when unexpected situations are encountered. In our study, we proposed a Reinforcement Learning based approach to train the vehicle agent to learn an automated lane change behavior such that it can intelligently make a lane change under diverse and even unforeseen scenarios. Particularly, we treated both state space and action space as continuous, and designed a Q-function approximator that has a closed-form greedy policy, which contributes to the computation efficiency of our deep Q-learning algorithm. Extensive simulations are conducted for training the algorithm, and the results illustrate that the Reinforcement Learning based vehicle agent is capable of learning a smooth and efficient driving policy for lane change maneuvers.",
"title": ""
},
{
"docid": "5f52b31afe9bf18f009a10343ccedaf0",
"text": "The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating the quality degradation caused by dim LCD backlight for mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of human visual system to preserve the color appearance of backlight-scaled images as much as possible. Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.",
"title": ""
},
{
"docid": "7e0b9941d5019927fce0a1223a88d6b5",
"text": "Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a \"Challenge Project on Video Event Taxonomy\" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices.",
"title": ""
},
{
"docid": "028246325da6891ce691fe60436943ec",
"text": "Resumen: Introducción. Las pruebas de Fluencia Verbal (FV) Semántica (FVS) y Fonológica (FVF) son muy empleadas en la práctica clínica. Disponer de diferentes pruebas alternativas que además tengan en cuenta el efecto de variables sociodemográficas mejorarían su uso como tests de cribado, permitiendo diferenciar a personas con y sin Enfermedad de Alzheimer (EA). Objetivos. Comparar la capacidad discriminativa de las tareas de FVS “cosas en una casa” y “alimentos” frente a la tarea “animales” entre pacientes con EA (n = 50) y sujetos sanos (n = 50); comparar el uso de los fonemas “P”-“M”-“R” como tarea alternativa y/o paralela a los fonemas “F”-“A”-“S”; y valorar el uso combinado de ambos tipos de tareas junto con el de variables sociodemográficas para la discriminación de pacientes con EA y sujetos sanos. Resultados. Tanto la categoría semántica como fonológica muestran resultados semejantes, una alta correlación, mantienen la validez de criterio y permiten su utilización de forma paralela. El modelo de predicción que logra discriminar correctamente al 91% de los sujetos evaluados es el que incluye las tareas “cosas en una casa”, los fonemas “A” “S” y las variables edad y años de escolarización. Conclusiones. La utilización paralela de pruebas de FVS y FVF, junto con variables sociodemográficas mejora la capacidad discriminativa de las pruebas de FV. Palabras-clave: Demencia; enfermedad de Alzheimer; cribado; exploración neuropsicológica; fluidez verbal; fluidez semántica; fluencia fonológica. Title: Age, schooling and Verbal Fluency tasks for the screening of Alzheimer ́s disease patients. Abstract: Introduction. The Verbal (VF), Semantic (SF) and Phonemic Fluency (PF) tests are commonly used in clinical practice. Having different alternative tests, that could also allow for the effect of demographic variables, would improve their use as screening tests, making it possible to differentiate patients with or without Alzheimer ́s disease (AD). Aims. (1) To compare the discriminatory ability of the SF tasks “things in a house” and “food” versus the task “animals” among patients with the AD (n = 50) and healthy subjects (n = 50). (2) To compare the use of the phonemes such as “P”, “M” and “R” as an alternative and/or parallel task to the phonemes “F”, “A” and “S”. (3) To assess the combined use of both tasks with the demographic variables for the screening of AD patients and the healthy ones. Results. Both semantic and phonemic categories indicate similar results, high correlation, support the criteria validity and allow for their use in a parallel way. Among all the different roles assessed, the most successful in screening correctly 91% of the evaluated subjects is the one that includes tasks such as “things in a house”, the phonemes “A” and “S” and the age and schooling time variables. Conclusion. The parallel use of VF and PF, plus the demographic variables improve the discriminatory ability of the VF tests.",
"title": ""
},
{
"docid": "210acdd097910d183ce1bcd5aefe5b05",
"text": "Imaging spectroscopy is of growing interest as a new apradiation with matter. Imaging spectroscopy in the solar proach to Earth remote sensing. The Airborne Visible/Inreflected spectrum was conceived for the same objective, frared Imaging Spectrometer (AVIRIS) was the first imbut from the Earth looking and regional perspective aging sensor to measure the solar reflected spectrum from (Fig. 1). Molecules and particles of the land, water and 400 nm to 2500 nm at 10 nm intervals. The calibration atmosphere environments interact with solar energy in accuracy and signal-to-noise of AVIRIS remain unique. the 400–2500 nm spectral region through absorption, reThe AVIRIS system as well as the science research and flection, and scattering processes. Imaging spectrometers applications have evolved significantly in recent years. The in the solar reflected spectrum are developed to measure initial design and upgraded characteristics of the AVIRIS spectra as images in some or all of this portion of this system are described in terms of the sensor, calibration, spectrum. These spectral measurements are used to dedata system, and flight operation. This update on the chartermine constituent composition through the physics and acteristics of AVIRIS provides the context for the science chemistry of spectroscopy for science research and appliresearch and applications that use AVIRIS data acquired cations over the regional scale of the image. in the past several years. Recent science research and apTo pursue the objective of imaging spectroscopy, the plications are reviewed spanning investigations of atmoJet Propulsion Laboratory proposed to design and despheric correction, ecology and vegetation, geology and velop the Airborne Visible/Infrared Imaging Spectromesoils, inland and coastal waters, the atmosphere, snow and ter (AVIRIS) in 1983. AVIRIS first measured spectral ice hydrology, biomass burning, environmental hazards, images in 1987 and was the first imaging spectrometer satellite simulation and calibration, commercial applicato measure the solar reflected spectrum from 400 nm to tions, spectral algorithms, human infrastructure, as well as 2500 nm (Fig. 2). AVIRIS measures upwelling radiance spectral modeling. Elsevier Science Inc., 1998 through 224 contiguous spectral channels at 10 nm intervals across the spectrum. These radiance spectra are measured as images of 11 km width and up to 800 km INTRODUCTION length with 20 m spatial resolution. AVIRIS spectral images are acquired from the Q-bay of a NASA ER-2 airSpectroscopy is used in the laboratory in the disciplines craft from an altitude of 20,000 m. The spectral, radioof physics, chemistry, and biology to investigate material metric, and spatial calibration of AVIRIS is determined properties based on the interaction of electromagnetic in laboratory and monitored inflight each year. More than 4 TB of AVIRIS data have been acquired, and the requested data has been calibrated and distributed to inJet Propulsion Laboratory, California Institute of Technology, Pasadena, California vestigators since the initial flights. Address correspondence to R. O. Green, JPL Mail-Stop 306-438, AVIRIS has measured spectral images for science 4800 Oak Grove Dr., Pasadena, CA 91109-8099. E-mail: rog@gomez. research and applications in every year since 1987. More jpl.nasa.gov Received 24 June 1998; accepted 8 July 1998. than 250 papers and abstracts have been written for the",
"title": ""
},
{
"docid": "341e0b7d04b333376674dac3c0888f50",
"text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.",
"title": ""
},
{
"docid": "81a9c8a0314703f2c73789f46b394bfe",
"text": "In order to reproduce jaw motions and mechanics that match the human jaw function truthfully with the conception of bionics, a novel human jaw movement robot based on mechanical biomimetic principles was proposed. Firstly, based on the biomechanical properties of mandibular muscles, a jaw robot is built based on the 6-PSS parallel mechanism. Secondly, the inverse kinematics solution equations are derived. Finally, kinematics performances, such as workspace with the orientation constant, manipulability, dexterity of the jaw robot are obtained. These indices show that the parallel mechanism have a big enough flexible workspace, no singularity, and a good motion transfer performance for human chewing movement.",
"title": ""
},
{
"docid": "da4c868b35a235e25b96448337f07a0b",
"text": "In last few decades, human activity recognition grabbed considerable research attentions from a wide range of pattern recognition and human-computer interaction researchers due to its prominent applications such as smart home health care. For instance, activity recognition systems can be adopted in a smart home health care system to improve their rehabilitation processes of patients. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among which, physical human activity recognition through wearable sensors provides valuable information about an individual’s degree of functional ability and lifestyle. In this paper, we present a smartphone inertial sensors-based approach for human activity recognition. Efficient features are first extracted from raw data. The features include mean, median, autoregressive coefficients, etc. The features are further processed by a kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) to make them more robust. Finally, the features are trained with a Deep Belief Network (DBN) for successful activity recognition. The proposed approach was compared with traditional expression recognition approaches such as typical multiclass Support Vector Machine (SVM) and Artificial Neural Network (ANN) where it outperformed them. Keywords— Activity Recognition, Sensors, Smartphones, Deep Belief Network.",
"title": ""
},
{
"docid": "aef8b4098ade89a3218e01d15de01063",
"text": "This paper studies multidimensional matching between workers and jobs. Workers differ in manual and cognitive skills and sort into jobs that demand different combinations of these two skills. To study this multidimensional sorting, I develop a theoretical framework that generalizes the unidimensional notion of assortative matching. I derive the equilibrium in closed form and use this explicit solution to study biased technological change. The key finding is that an increase of worker-job complementarities in cognitive relative to manual inputs leads to more pronounced sorting and wage inequality across cognitive relative to manual skills. This can trigger wage polarization and boost aggregate wage dispersion. I then estimate the model for the US and identify sizeable technology shifts: During the 90s, worker-job complementarities in cognitive inputs increased by 15% whereas complementarities in manual inputs decreased by 41%. Besides this bias in complementarities, there has also been a strong cognitive skill -bias in production. Counterfactual exercises suggest that these technology shifts can account for observed changes in worker-job sorting, wage polarization and a significant part of the increase in US wage dispersion.",
"title": ""
},
{
"docid": "f25afc147ceb24fb1aca320caa939f10",
"text": "Third party intervention is a typical response to destructive and persistent social conflict and comes in a number of different forms attended by a variety of issues. Mediation is a common form of intervention designed to facilitate a negotiated settlement on substantive issues between conflicting parties. Mediators are usually external to the parties and carry an identity, motives and competencies required to play a useful role in addressing the dispute. While impartiality is generally seen as an important prerequisite for effective intervention, biased mediators also appear to have a role to play. This article lays out the different forms of third-party intervention in a taxonomy of six methods, and proposes a contingency model which matches each type of intervention to the appropriate stage of conflict escalation. Interventions are then sequenced, in order to assist the parties in de-escalating and resolving the conflict. It must be pointed out, however, that the mixing of interventions with different power bases raises a number of ethical and moral questions about the use of reward and coercive power by third parties. The article then discusses several issues around the practice of intervention. It is essential to give these issues careful consideration if third-party methods are to play their proper and useful role in the wider process of conflict transformation. Psychology from the University of Saskatchewan and a Ph.D. in Social Psychology from the University of Michigan. He has provided training and consulting services to various organizations and international institutes in conflict management. His current interests include third party intervention, interactive conflict resolution, and reconciliation in situations of ethnopolitical conflict. A b s t r a c t A b o u t t h e C o n t r i b u t o r",
"title": ""
},
{
"docid": "a68872f1835e1c477d04335ccce99862",
"text": "An industrial robot today uses measurements of its joint positions and models of its kinematics and dynamics to estimate and control its end-effector position. Substantially better end-effector position estimation and control performance would be obtainable if direct measurements of its end-effector position were also used. The subject of this paper is extended Kalman filtering for precise estimation of the position of the end-effector of a robot using, in addition to the usual measurements of the joint positions, direct measurements of the end-effector position. The estimation performances of extended Kalman filters are compared in applications to a planar two-axis robotic arm with very flexible links. The comparisons shed new light on the dependence of extended Kalman filter estimation performance on the quality of the model of the arm dynamics that the extended Kalman filter operates with. KEY WORDS—extended Kalman filter, estimation, flexible links, robot",
"title": ""
},
{
"docid": "ab54e41b0e79eed8f6f7fc1b0f9d9ddb",
"text": "The stemming is the process to derive the basic word by removing affix of the word. The stemming is tightly related to basic word or lemma and the sub lemmas. The lemma and sub lemma of Indonesian Language have been grown and absorb from foreign languages or Indonesian traditional languages. Our approach provides the easy way of stemming Indonesian language through flexibility affix classification. Therefore, the affix additional can be applied in easy way. We experiment with 1,704 text documents with 255,182 tokens and the stemmed words is 3,648 words. In this experiment, we compare our approach performance to the confix-stripping approach performance. The result shows that our performance can cover the failure in stemming reduplicated words of confix-stripping approach.",
"title": ""
},
{
"docid": "eb2459cbb99879b79b94653c3b9ea8ef",
"text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-ProgrammerComputer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a nondifferentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.",
"title": ""
},
{
"docid": "8ac8ad61dc5357f3dc3ab1020db8bada",
"text": "We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using semantic hashing [6], 28-bit codes can be used to retrieve images that are similar to a query image in a time that is independent of the size of the database. This extremely fast retrieval makes it possible to search using multiple di erent transformations of the query image. 256-bit binary codes allow much more accurate matching and can be used to prune the set of images found using the 28-bit codes.",
"title": ""
},
{
"docid": "adfe1398a35e63b0bfbf2fd55e7a9d81",
"text": "Neutrosophic numbers easily allow modeling uncertainties of prices universe, thus justifying the growing interest for theoretical and practical aspects of arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity. Theoretically, the need for discernment is revealed by decision logic, and more recently by the new neutrosophic logic and by constructing neutrosophic-type index numbers, exemplified in the context and applied to the world of prices, and, from a practical standpoint, by the possibility to use index numbers in characterization of some cyclical phenomena and economic processes, e.g. inflation rate. The neutrosophic index numbers or neutrosophic indexes are the key topic of this article. The next step is an interrogative and applicative one, drawing the coordinates of an optimized discernment centered on neutrosophic-type index numbers. The inevitable conclusions are optimistic in relation to the common future of the index method and neutrosophic logic, with statistical and economic meaning and utility.",
"title": ""
},
{
"docid": "fcf88ca7ca7ae03e7feea2ec7a5181a5",
"text": "Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and highresolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0% in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9% mean IoU, which outperforms the previous state-of-the-art results.",
"title": ""
},
{
"docid": "77a09b094d4622d01d09f042f1ae3045",
"text": "Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization. In this paper, we present a data-driven approach for refining degraded RAWdepth maps that are coupled with an RGB image. The key idea of our approach is to take advantage of a training set of high-quality depth data and transfer its information to the RAW depth map through multi-scale dictionary learning. Utilizing a sparse representation, our method learns a dictionary of geometric primitives which captures the correlation between high-quality mesh data, RAW depth maps and RGB images. The dictionary is learned and applied in a manner that accounts for various practical issues that arise in dictionary-based depth refinement. Compared to previous approaches that only utilize the correlation between RAW depth maps and RGB images, our method produces improved depth maps without over-smoothing. Since our approach is data driven, the refinement can be targeted to a specific class of objects by employing a corresponding training set. In our experiments, we show that this leads to additional improvements in recovering depth maps of human faces.",
"title": ""
},
{
"docid": "15f6b6be4eec813fb08cb3dd8b9c97f2",
"text": "ACKNOWLEDGEMENTS First, I would like to thank my supervisor Professor H. Levent Akın for his guidance. This thesis would not have been possible without his encouragement and enthusiastic support. I would also like to thank all the staff at the Artificial Intelligence Laboratory for their encouragement throughout the year. Their success in RoboCup is always a good motivation. Sharing their precious ideas during the weekly seminars have always guided me to the right direction. Finally I am deeply grateful to my family and to my wife Derya. They always give me endless love and support, which has helped me to overcome the various challenges along the way. Thank you for your patience... The field of Intelligent Transport Systems (ITS) is improving rapidly in the world. Ultimate aim of such systems is to realize fully autonomous vehicle. The researches in the field offer the potential for significant enhancements in safety and operational efficiency. Lane tracking is an important topic in autonomous navigation because the navigable region usually stands between the lanes, especially in urban environments. Several approaches have been proposed, but Hough transform seems to be the dominant among all. A robust lane tracking method is also required for reducing the effect of the noise and achieving the required processing time. In this study, we present a new lane tracking method which uses a partitioning technique for obtaining Multiresolution Hough Transform (MHT) of the acquired vision data. After the detection process, a Hidden Markov Model (HMM) based method is proposed for tracking the detected lanes. Traffic signs are important instruments to indicate the rules on roads. This makes them an essential part of the ITS researches. It is clear that leaving traffic signs out of concern will cause serious consequences. Although the car manufacturers have started to deploy intelligent sign detection systems on their latest models, the road conditions and variations of actual signs on the roads require much more robust and fast detection and tracking methods. Localization of such systems is also necessary because traffic signs differ slightly between countries. This study also presents a fast and robust sign detection and tracking method based on geometric transformation and genetic algorithms (GA). Detection is done by a genetic algorithm (GA) approach supported by a radial symmetry check so that false alerts are considerably reduced. Classification v is achieved by a combination of SURF features with NN or SVM classifiers. A heuristic …",
"title": ""
},
{
"docid": "46fdba2028abec621e8b9fbd0919e043",
"text": "The HF band, located in between 3-30 MHz, can offer single hop communication channels over a very long distances - even up to around the world. Traditionally, the HF is seen primarily as a solution for long communication ranges although it may also be a perfect choice for much shorter communication ranges when high data rates are not a primary target. It is well known that the HF channel is a demanding environment to operate since it changes rapidly, i.e., channel is available at a moment but the next moment it is not. Therefore, a big problem in HF communications is channel access or channel selection. By choosing the used HF channels wisely, i.e., cognitively, the channel behavior and system reliability considerably improves. This paper discusses about a change of paradigm in HF communication that will take place after applying cognitive principles on the HF system.",
"title": ""
}
] |
scidocsrr
|
59102fb27954a7b9cb3b03d26043db34
|
On the statistical properties of viral misinformation in online social media
|
[
{
"docid": "832bed06d844fedb2867750bb7ec3989",
"text": "Viral diffusion allows a piece of information to widely and quickly spread within the network of users through word-ofmouth. In this paper, we study the problem of modeling both item and user factors that contribute to viral diffusion in Twitter network. We identify three behaviorial factors, namely user virality, user susceptibility and item virality, that contribute to viral diffusion. Instead of modeling these factors independently as done in previous research, we propose a model that measures all the factors simultaneously considering their mutual dependencies. The model has been evaluated on both synthetic and real datasets. The experiments show that our model outperforms the existing ones for synthetic data with ground truth labels. Our model also performs well for predicting the hashtags that have higher retweet likelihood. We finally present case examples that illustrate how the models differ from one another.",
"title": ""
}
] |
[
{
"docid": "97e4facde730c97a080ed160682f5dd0",
"text": "The application of deep learning to symbolic domains remains an active research endeavour. Graph neural networks (GNN), consisting of trained neural modules which can be arranged in different topologies at run time, are sound alternatives to tackle relational problems which lend themselves to graph representations. In this paper, we show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings ∈ R and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, i.e. is vertex v1 more central than vertex v2 given centrality c?. We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained (n = 128). Finally, we believe that our contributions attest to the potential of GNNs in symbolic domains in general and in relational learning in particular.",
"title": ""
},
{
"docid": "57eb8d5adbf8374710a3c40074fb38f8",
"text": "Information security and privacy in the healthcare sector is an issue of growing importance. The adoption of digital patient records, increased regulation, provider consolidation and the increasing need for information exchange between patients, providers and payers, all point towards the need for better information security. We critically survey the literature on information security and privacy in healthcare, published in information systems journals as well as many other related disciplines including health informatics, public health, law, medicine, the trade press and industry reports. In this paper, we provide a holistic view of the recent research and suggest new areas of interest to the information systems community.",
"title": ""
},
{
"docid": "ac4c584379ad2fac9b5e28b550e02b67",
"text": "Primary cilium dysfunction underlies the pathogenesis of Bardet-Biedl syndrome (BBS), a genetic disorder whose symptoms include obesity, retinal degeneration, and nephropathy. However, despite the identification of 12 BBS genes, the molecular basis of BBS remains elusive. Here we identify a complex composed of seven highly conserved BBS proteins. This complex, the BBSome, localizes to nonmembranous centriolar satellites in the cytoplasm but also to the membrane of the cilium. Interestingly, the BBSome is required for ciliogenesis but is dispensable for centriolar satellite function. This ciliogenic function is mediated in part by the Rab8 GDP/GTP exchange factor, which localizes to the basal body and contacts the BBSome. Strikingly, Rab8(GTP) enters the primary cilium and promotes extension of the ciliary membrane. Conversely, preventing Rab8(GTP) production blocks ciliation in cells and yields characteristic BBS phenotypes in zebrafish. Our data reveal that BBS may be caused by defects in vesicular transport to the cilium.",
"title": ""
},
{
"docid": "a82a4d82b2713e0fe0a562ac09d40fef",
"text": "The advent of new cryptographic methods in recent years also includes schemes related to functional encryption. Within these schemes Attribute-based Encryption (ABE) became the most popular, including ciphertext-policy and key-policy ABE. ABE and related schemes are widely discussed within the mathematical community. Unfortunately, there are only a few implementations circulating within the computer science and the applied cryptography community. Hence, it is very difficult to include these new cryptographic methods in real-world applications. This article gives an overview of existing implementations and elaborates on their value in specific cloud computing and IoT application scenarios. This also includes a summary of the additions the authors made to current implementations such as the introduction of dynamic attributes. Keywords—Attribute-based Encryption, Applied Cryptography, Internet of Things, Cloud Computing Security",
"title": ""
},
{
"docid": "6d9735b19ab2cb1251bd294045145367",
"text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.",
"title": ""
},
{
"docid": "4ff132c6e66ddb34d4f4e537dc2e0883",
"text": "Obtaining reliable data and drawing meaningful and robust inferences from diffusion MRI can be challenging and is subject to many pitfalls. The process of quantifying diffusion indices and eventually comparing them between groups of subjects and/or correlating them with other parameters starts at the acquisition of the raw data, followed by a long pipeline of image processing steps. Each one of these steps is susceptible to sources of bias, which may not only limit the accuracy and precision, but can lead to substantial errors. This article provides a detailed review of the steps along the analysis pipeline and their associated pitfalls. These are grouped into 1 pre-processing of data; 2 estimation of the tensor; 3 derivation of voxelwise quantitative parameters; 4 strategies for extracting quantitative parameters; and finally 5 intra-subject and inter-subject comparison, including region of interest, histogram, tract-specific and voxel-based analyses. The article covers important aspects of diffusion MRI analysis, such as motion correction, susceptibility and eddy current distortion correction, model fitting, region of interest placement, histogram and voxel-based analysis. We have assembled 25 pitfalls (several previously unreported) into a single article, which should serve as a useful reference for those embarking on new diffusion MRI-based studies, and as a check for those who may already be running studies but may have overlooked some important confounds. While some of these problems are well known to diffusion experts, they might not be to other researchers wishing to undertake a clinical study based on diffusion MRI.",
"title": ""
},
{
"docid": "1c30cc0f1e69ec2309a93bcaecc2aeb3",
"text": "Agility and resilience requirements of future cellular networks may not be fully satisfied by terrestrial base stations in cases of unexpected or temporary events. A promising solution is assisting the cellular network via low-altitude unmanned aerial vehicles equipped with base stations, i.e., drone-cells. Although drone-cells provide a quick deployment opportunity as aerial base stations, efficient placement becomes one of the key issues. In addition to mobility of the drone-cells in the vertical dimension as well as the horizontal dimension, the differences between the air-to-ground and terrestrial channels cause the placement of the drone-cells to diverge from placement of terrestrial base stations. In this paper, we first highlight the properties of the drone-cell placement problem, and formulate it as a 3-D placement problem with the objective of maximizing the revenue of the network. After some mathematical manipulations, we formulate an equivalent quadratically-constrained mixed integer non-linear optimization problem and propose a computationally efficient numerical solution for this problem. We verify our analytical derivations with numerical simulations and enrich them with discussions which could serve as guidelines for researchers, mobile network operators, and policy makers.",
"title": ""
},
{
"docid": "321ae36452b9aac47c833db017116bb5",
"text": "Near Field Communication, NFCis one of the latest short range wireless communication technologies. NFC provides safe communication between electronic gadgets. NFC-enabled devices can just be pointed or touched by the users of their devices to other NFC-enabled devices to communicate with them. With NFC technology, communication is established when an NFC-compatible device is brought within a few centimetres of another i.e. around 20 cm theoretically (4cm is practical). The immense benefit of the short transmission range is that it prevents eavesdropping on NFC-enabled dealings. NFC technology enables several innovative usage scenarios for mobile devices. NFC technology works on the basis of RFID technology which uses magnetic field induction to commence communication between electronic devices in close vicinity. NFC operates at 13.56MHz and has 424kbps maximum data transfer rate. NFC is complementary to Bluetooth and 802.11 with their long distance capabilities. In card emulation mode NFC devices can offer contactless/wireless smart card standard. This technology enables smart phones to replace traditional plastic cards for the purpose of ticketing, payment, etc. Sharing (share files between phones), service discovery i.e. get information by touching smart phones etc. are other possible applications of NFC using smart phones. This paper provides an overview of NFC technology in a detailed manner including working principle, transmission details, protocols and standards, application scenarios, future market, security standards and vendor’s chipsets which are available for this standard. This comprehensive survey should serve as a useful guide for students, researchers and academicians who are interested in NFC Technology and its applications [1].",
"title": ""
},
{
"docid": "8e8b7ddf171a67cae9d41cc801506a89",
"text": "Many real-world problems are composed of several interacting components. In order to facilitate research on such interactions, the Traveling Thief Problem (TTP) was created in 2013 as the combination of two wellunderstood combinatorial optimization problems. With this article, we contribute in four ways. First, we create a comprehensive dataset that comprises the performance data of 21 TTP algorithms on the full original set of 9720 TTP instances. Second, we define 55 characteristics for all TPP instances that can be used to select the best algorithm on a per-instance basis. Third, we use these algorithms and features to construct the first algorithm portfolios for TTP, clearly outperforming the single best algorithm. Finally, we study which algorithms contribute most to this portfolio.",
"title": ""
},
{
"docid": "fafbcccd49d324ea45dbe4c341d4c7d9",
"text": "This paper discusses the technical issues that were required to adapt a KUKA Robocoaster for use as a real-time motion simulator. Within this context, the paper addresses the physical modifications and the software control structure that were needed to have a flexible and safe experimental setup. It also addresses the delays and transfer function of the system. The paper is divided into two sections. The first section describes the control and safety structures of the MPI Motion Simulator. The second section shows measurements of latencies and frequency responses of the motion simulator. The results show that the frequency responses of the MPI Motion Simulator compare favorably with high-end Stewart Platforms, and therefore demonstrate the suitability of robot-based motion simulators for flight simulation.",
"title": ""
},
{
"docid": "2ec9ac2c283fa0458eb97d1e359ec358",
"text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.",
"title": ""
},
{
"docid": "14100da75b1050cbc37ffc9496326432",
"text": "Speaker diarisation, the task of answering “who spoke when?”, is often considered to consist of three independent stages: speech activity detection, speaker segmentation and speaker clustering. These represent the separation of speech and nonspeech, the splitting into speaker homogeneous speech segments, followed by grouping together those which belong to the same speaker. This paper is concerned with speaker clustering, which is typically performed by bottom-up clustering using the Bayesian information criterion (BIC). We present a novel semi-supervised method of speaker clustering based on a deep neural network (DNN) model. A speaker separation DNN trained on independent data is used to iteratively relabel the test data set. This is achieved by reconfiguration of the output layer, combined with fine tuning in each iteration. A stopping criterion involving posteriors as confidence scores is investigated. Results are shown on a meeting task (RT07) for single distant microphones and compared with standard diarisation approaches. The new method achieves a diarisation error rate (DER) of 14.8%, compared to a baseline of 19.9%.",
"title": ""
},
{
"docid": "2da0db20b51b06036fa2fda8342202e3",
"text": "Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics, scholars who are often interested in nonEnglish and possibly multilingual textual datasets, these advances may be difficult to access. This article discusses practical issues that arise in the processing, management, translation, and analysis of textual data with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that have been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.",
"title": ""
},
{
"docid": "987146e03e20abf91e9bff365fa25f43",
"text": "This paper presents a novel non-native speech synthesis technique that preserves the individuality of a non-native speaker. Crosslingual speech synthesis based on voice conversion or Hidden Markov Model (HMM)-based speech synthesis is a technique to synthesize foreign language speech using a target speaker’s natural speech uttered in his/her mother tongue. Although the technique holds promise to improve a wide variety of applications, it tends to cause degradation of target speaker’s individuality in synthetic speech compared to intra-lingual speech synthesis. This paper proposes a new approach to speech synthesis that preserves speaker individuality by using non-native speech spoken by the target speaker. Although the use of non-native speech makes it possible to preserve the speaker individuality in the synthesized target speech, naturalness is significantly degraded as the synthesized speech waveform is directly affected by unnatural prosody and pronunciation often caused by differences in the linguistic systems of the source and target languages. To improve naturalness while preserving speaker individuality, we propose (1) a prosody correction method based on model adaptation, and (2) a phonetic correction method based on spectrum replacement for unvoiced consonants. The experimental results using English speech uttered by native Japanese speakers demonstrate that (1) the proposed methods are capable of significantly improving naturalness while preserving the speaker individuality in synthetic speech, and (2) the proposed methods also improve intelligibility as confirmed by a dictation test. key words: cross-lingual speech synthesis, English-Read-by-Japanese, speaker individuality, HMM-based speech synthesis, prosody correction, phonetic correction",
"title": ""
},
{
"docid": "71a262b1c91c89f379527b271e45e86e",
"text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.",
"title": ""
},
{
"docid": "b0133ea142da1d4f2612407d4d8bf6c0",
"text": "The ability to transfer knowledge gained in previous tasks into new contexts is one of the most important mechanisms of human learning. Despite this, adapting autonomous behavior to be reused in partially similar settings is still an open problem in current robotics research. In this paper, we take a small step in this direction and propose a generic framework for learning transferable motion policies. Our goal is to solve a learning problem in a target domain by utilizing the training data in a different but related source domain. We present this in the context of an autonomous MAV flight using monocular reactive control, and demonstrate the efficacy of our proposed approach through extensive real-world flight experiments in outdoor cluttered environments.",
"title": ""
},
{
"docid": "158cdd1c7740f30ec87e10a19171721b",
"text": "The current practice of physical diagnosis is dependent on physician skills and biases, inductive reasoning, and time efficiency. Although the clinical utility of echocardiography is well known, few data exist on how to integrate 2-dimensional screening \"quick-look\" ultrasound applications into a novel, modernized cardiac physical examination. We discuss the evidence basis behind ultrasound \"signs\" pertinent to the cardiovascular system and elemental in synthesis of bedside diagnoses and propose the application of a brief cardiac limited ultrasound examination based on these signs. An ultrasound-augmented cardiac physical examination can be taught in traditional medical education and has the potential to improve bedside diagnosis and patient care.",
"title": ""
},
{
"docid": "e8c9067f13c9a57be46823425deb783b",
"text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.",
"title": ""
},
{
"docid": "e14f1292fd3d0f744f041219217f1e15",
"text": "Previous research highlights how adept people are at emotional recovery after rejection, but less research has examined factors that can prevent full recovery. In five studies, we investigate how changing one's self-definition in response to rejection causes more lasting damage. We demonstrate that people who endorse an entity theory of personality (i.e., personality cannot be changed) report alterations in their self-definitions when reflecting on past rejections (Studies 1, 2, and 3) or imagining novel rejection experiences (Studies 4 and 5). Further, these changes in self-definition hinder post-rejection recovery, causing individuals to feel haunted by their past, that is, to fear the recurrence of rejection and to experience lingering negative affect from the rejection. Thus, beliefs that prompt people to tie experiences of rejection to self-definition cause rejection's impact to linger.",
"title": ""
},
{
"docid": "e3c8f10316152f0bc775f4823b79c7f6",
"text": "The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.",
"title": ""
}
] |
scidocsrr
|
841917e86f2a28882c4e5ac9d3079c02
|
Character and Subword-Based Word Representation for Neural Language Modeling Prediction
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "caa26d9aa26eaf91a1c942c9f116912e",
"text": "We present two recently released opensource taggers: NameTag is a free software for named entity recognition (NER) which achieves state-of-the-art performance on Czech; MorphoDiTa (Morphological Dictionary and Tagger) performs morphological analysis (with lemmatization), morphological generation, tagging and tokenization with state-of-the-art results for Czech and a throughput around 10-200K words per second. The taggers can be trained for any language for which annotated data exist, but they are specifically designed to be efficient for inflective languages, Both tools are free software under LGPL license and are distributed along with trained linguistic models which are free for non-commercial use under the CC BY-NC-SA license. The releases include standalone tools, C++ libraries with Java, Python and Perl bindings and web services.",
"title": ""
},
{
"docid": "920a3f7d43295ee45fe689b7af5c7088",
"text": "Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation. We model the problem of inflection generation as a character sequence to sequence learning problem and present a variant of the neural encoder-decoder model for solving it. Our model is language independent and can be trained in both supervised and semi-supervised settings. We evaluate our system on seven datasets of morphologically rich languages and achieve either better or comparable results to existing state-of-the-art models of inflection generation.",
"title": ""
},
{
"docid": "c9d833d872ab0550edb0aa26565ac76b",
"text": "In this paper we investigate the potential of the neural machine translation (NMT) when taking into consideration the linguistic aspect of target language. From this standpoint, the NMT approach with attention mechanism [1] is extended in order to produce several linguistically derived outputs. We train our model to simultaneously output the lemma and its corresponding factors (e.g. part-of-speech, gender, number). The word level translation is built with a mapping function using a priori linguistic information. Compared to the standard NMT system, factored architecture increases significantly the vocabulary coverage while decreasing the number of unknown words. With its richer architecture, the Factored NMT approach allows us to implement several training setup that will be discussed in detail along this paper. On the IWSLT’15 English-to-French task, FNMT model outperforms NMT model in terms of BLEU score. A qualitative analysis of the output on a set of test sentences shows the effectiveness of the FNMT model.",
"title": ""
},
{
"docid": "4a098609770618240fbaebbbc891883d",
"text": "We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks. 1",
"title": ""
}
] |
[
{
"docid": "1a4cb9038d3bd71ecd24187ed860e0f7",
"text": "One of the most important fields in discrete mathematics is graph theory. Graph theory is discrete structures, consisting of vertices and edges that connect these vertices. Problems in almost every conceivable discipline can be solved using graph models. The field graph theory started its journey from the problem of Konigsberg Bridges in 1735. This paper is a guide for the applied mathematician who would like to know more about network security, cryptography and cyber security based of graph theory. The paper gives a brief overview of the subject and the applications of graph theory in computer security, and provides pointers to key research and recent survey papers in the area.",
"title": ""
},
{
"docid": "d315aa25c69ad39164c458dabe914417",
"text": "The increase of scientific collaboration coincides with the technological and social advancement of social software applications which can change the way we research. Among social software, social network sites have recently gained immense popularity in a hedonic context. This paper focuses on social network sites as an emerging application designed for the specific needs of researchers. To give an overview about these sites we use a data set of 24 case studies and in-depth interviews with the founders of ten social research network sites. The gathered data leads to a first tentative taxonomy and to a definition of SRNS identifying four basic functionalities identity and network management, communication, information management, and collaboration. The sites in the sample correspond to one of the following four types: research directory sites, research awareness sites, research management sites and research collaboration sites. These results conclude with implications for providers of social research network sites.",
"title": ""
},
{
"docid": "79d5c9c6ec5314bab9a4868b5beb9fdf",
"text": "A good user experience is central for the success of interactive products. To improve products concerning these quality aspects it is thus also important to be able to measure user experience in an efficient and reliable way. But measuring user experience is not an end in itself. Several different questions can be the reason behind the wish to measure the user experience of a product quantitatively. We discuss several typical questions associated with the measurement of user experience and we show how these questions can be answered with a questionnaire with relatively low effort. In this paper the user experience questionnaire UEQ is used, but the general approach may be transferred to other questionnaires as well.",
"title": ""
},
{
"docid": "530319923688ae7731e263d5ec4ff7c9",
"text": "In this paper we present some preliminary results on the generation of word embeddings for the Italian language. We compare two popular word representation models, word2vec and GloVe, and train them on two datasets with different stylistic properties. We test the generated word embeddings on a word analogy test derived from the one originally proposed for word2vec, adapted to capture some of the linguistic aspects that are specific of Italian. Results show that the tested models are able to create syntactically and semantically meaningful word embeddings despite the higher morphological complexity of Italian with respect to English. Moreover, we have found that the stylistic properties of the training dataset plays a relevant role in the type of information captured by the produced vectors.",
"title": ""
},
{
"docid": "47192ebdd7c5998359e5cf0a059b5434",
"text": "In this paper, we present a hybrid approach for performing token and sentence levels Dialect Identification in Arabic. Specifically we try to identify whether each token in a given sentence belongs to Modern Standard Arabic (MSA), Egyptian Dialectal Arabic (EDA) or some other class and whether the whole sentence is mostly EDA or MSA. The token level component relies on a Conditional Random Field (CRF) classifier that uses decisions from several underlying components such as language models, a named entity recognizer and and a morphological analyzer to label each word in the sentence. The sentence level component uses a classifier ensemble system that relies on two independent underlying classifiers that model different aspects of the language. Using a featureselection heuristic, we select the best set of features for each of these two classifiers. We then train another classifier that uses the class labels and the confidence scores generated by each of the two underlying classifiers to decide upon the final class for each sentence. The token level component yields a new state of the art F-score of 90.6% (compared to previous state of the art of 86.8%) and the sentence level component yields an accuracy of 90.8% (compared to 86.6% obtained by the best state of the art system).",
"title": ""
},
{
"docid": "531a79a362839a66e3d33609c48c69a0",
"text": "Renewable Energy Sources (RES) plays a vital role today, owing to its excess availability and they are gaining importance because of the merits like pollution free, eco friendly, no maintenance, sustainability etc. Solar rays consist of photons, which are considered to be stream of energetic light particles. The solar cell or Photovoltaic cell (PV) converts photons present in solar rays to current. Intensity of sunlight striking the surface, efficiency and size of PV are the factors responsible for current generation from PV cell. However, change in irradiance, temperature or shadow effect will affect the overall PV performance. This will lead to decrease in output power from the solar cell. While modeling the PV in MATLAB, single diode model is given more importance over double diode model, because of the advantages like moderate complexity and acceptable accuracy. The PV is modeled in MATLAB considering the design equations of photon current and reverse saturation current. The design of power electronic converter is very important. To ensure reliability, safety, and to provide maximum efficiency to the PV system, selection and design of power electronic converters should be correct as well as optimal. The power converter is interfaced between PV panel and load. The Reference voltage is fixed based on the open circuit voltage available at the output of the converter. The benefits of zeta converter over other converters include non pulsating output current, lower settling time, adaptability, etc. In order to satisfy load voltage ripple requirement, and because of the non pulsating output current, zeta converter permits the use of small output capacitor. The objective of this paper is to maintain the constant output voltage, irrespective of change in irradiance. Change in irradiance, causes the change of output voltage from PV panel, which causes the duty cycle (D) to vary and duty cycle depends on both output voltage from PV module and reference voltage. Change in duty cycle makes the zeta converter to operate either in buck or boost mode.",
"title": ""
},
{
"docid": "655f28b1eeed4c571237474c96ac84a0",
"text": "We present six cases of extra-axial lesions: three meningiomas [including one intraventricular and one cerebellopontine angle (CPA) meningioma], one dural metastasis, one CPA schwannoma and one choroid plexus papilloma which were chosen from a larger cohort of extra-axial tumors evaluated in our institution. Apart from conventional MR examinations, all the patients also underwent perfusion-weighted imaging (PWI) using dynamic susceptibility contrast method on a 1.5 T MR unit (contrast: 0.3 mmol/kg, rate 5 ml/s). Though the presented tumors showed very similar appearance on conventional MR images, they differed significantly in perfusion examinations. The article draws special attention to the usefulness of PWI in the differentiation of various extra-axial tumors and its contribution in reaching final correct diagnoses. Finding a dural lesion with low perfusion parameters strongly argues against the diagnosis of meningioma and should raise a suspicion of a dural metastasis. In cases of CPA tumors, a lesion with low relative cerebral blood volume values should be suspected to be schwannoma, allowing exclusion of meningioma to be made. In intraventricular tumors arising from choroid plexus, low perfusion parameters can exclude a diagnosis of meningioma. In our opinion, PWI as an easy and quick to perform functional technique should be incorporated into the MR protocol of all intracranial tumors including extra-axial neoplasms.",
"title": ""
},
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "5fd1f96ae4fd4159bc99bd2d4b02c6da",
"text": "Question generation has been a research topic for a long time, where a big challenge is how to generate deep and natural questions. To tackle this challenge, we propose a system to generate natural language questions from a domain-specific knowledge base (KB) by utilizing rich web information. A small number of question templates are first created based on the KB and instantiated into questions, which are used as seed set and further expanded through the web to get more question candidates. A filtering model is then applied to select candidates with high grammaticality and domain relevance. The system is able to generate large amount of in-domain natural language questions with considerable semantic diversity and is easily applicable to other domains. We evaluate the quality of the generated questions by human judgments and the results show the effectiveness of our proposed system.",
"title": ""
},
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "1d1fdf869a30a8ba9437e3b18bc8c872",
"text": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of “Deep Learning” strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies.",
"title": ""
},
{
"docid": "b57b06d861b5c4666095e356ee7e010b",
"text": "Phishing is a form of electronic identity theft in which a combination of social engineering and Web site spoofing techniques is used to trick a user into revealing confidential information with economic value. The problem of social engineering attack is that there is no single solution to eliminate it completely, since it deals largely with the human factor. This is why implementing empirical experiments is very crucial in order to study and to analyze all malicious and deceiving phishing Web site attack techniques and strategies. In this paper, three different kinds of phishing experiment case studies have been conducted to shed some light into social engineering attacks, such as phone phishing and phishing Web site attacks for designing effective countermeasures and analyzing the efficiency of performing security awareness about phishing threats. Results and reactions to our experiments show the importance of conducting phishing training awareness for all users and doubling our efforts in developing phishing prevention techniques. Results also suggest that traditional standard security phishing factor indicators are not always effective for detecting phishing websites, and alternative intelligent phishing detection approaches are needed.",
"title": ""
},
{
"docid": "ce3ac7716734e2ebd814900d77ca3dfb",
"text": "The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluation on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.",
"title": ""
},
{
"docid": "209ee5fc48584ce98c7dcad664be11ac",
"text": "A traffic surveillance system includes detection of vehicles which involves the detection and identification of license plate numbers. This paper proposes an intelligent approach of detecting vehicular number plates automatically using three efficient algorithms namely Ant colony optimization (ACO) used in plate localization for identifying the edges, a character segmentation and extraction algorithm and a hierarchical combined classification method based on inductive learning and SVM for individual character recognition. Initially the performance of the Ant Colony Optimization algorithm is compared with the existing algorithms for edge detection namely Canny, Prewitt, Roberts, Mexican Hat and Sobel operators. The Ant Colony Optimization used in communication systems has certain limitations when used in edge detection like random initial ant position in the image and the heuristic information being highly dictated by transition probabilities. In this paper, modifications like assigning a well-defined initial ant position and making use of weights to calculate heuristic value which will provide additional information about transition probabilities are used to overcome the limitations. Further a character extraction and segmentation algorithm which uses the concept of Kohonen neural network to identify the position and dimensions of characters is presented along with a comparison with the existing Histogram and Connected Pixels approach. Finally an inductive learning based classification method is compared with the Support Vector Machine based classification method and a combined classification method which uses both inductive learning and Support Vector Machine based approach for character recognition is proposed. The proposed character recognition algorithm may be more efficient than the other two.",
"title": ""
},
{
"docid": "cb7955dc05925c4d7033e20762f53dd9",
"text": "We propose a shape-based approach to curve evolution for the segmentation of medical images containing known object types. In particular, motivated by the work of Leventon, Grimson, and Faugeras (2000), we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, can deal with topological changes of the curve, is robust to noise and initial contour placements, and is computationally efficient. At the same time, it avoids the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications; two-dimensional segmentation of cardiac magnetic resonance imaging (MRI) and three-dimensional segmentation of prostate MRI.",
"title": ""
},
{
"docid": "87f93c4d02b23b5d9488645bd39e49b8",
"text": "Information fusion is a field of research that strives to establish theories, techniques and tools that exploit synergies in data retrieved from multiple sources. In many real-world applications huge amounts of data need to be gathered, evaluated and analyzed in order to make the right decisions. An important key element of information fusion is the adequate presentation of the data that guides decision-making processes efficiently. This is where theories and tools developed in information visualization, visual data mining and human computer interaction (HCI) research can be of great support. This report presents an overview of information fusion and information visualization, highlighting the importance of the latter in information fusion research. Information visualization techniques that can be used in information fusion are presented and analyzed providing insights into its strengths and weakness. Problems and challenges regarding the presentation of information that the decision maker faces in the ground situation awareness scenario (GSA) lead to open questions that are assumed to be the focus of further research.",
"title": ""
},
{
"docid": "0ccf20f28baf8a11c78d593efb9f6a52",
"text": "From a traction application point of view, proper operation of the synchronous reluctance motor over a wide speed range and mechanical robustness is desired. This paper presents new methods to improve the rotor mechanical integrity and the flux weakening capability at high speed using geometrical and variable ampere-turns concepts. The results from computer-aided analysis and experiment are compared to evaluate the methods. It is shown that, to achieve a proper design at high speed, the magnetic and mechanical performances need to be simultaneously analyzed due to their mutual effect.",
"title": ""
},
{
"docid": "6a470404c36867a18a98fafa9df6848f",
"text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.",
"title": ""
},
{
"docid": "f5ac7e29f54819f318a73f3e7c15091c",
"text": "The dismemberment of a corpse is fairly rare in forensic medicine. It is usually performed with different types of sharp tools and used as a method of concealing the body and thus erasing proof of murder. In this context, the disarticulation of body parts is an even rarer event. The authors present the analysis of six dismemberment cases (well-preserved corpses or skeletonized remains with clear signs of dismemberment), arising from different contexts and in which different types of sharp tools were used. Two cases in particular showed peculiar features where separation of the forearms and limbs from the rest of the body was performed not by cutting through bones but through a meticulous disarticulation. The importance of a thorough anthropological investigation is thus highlighted, since it provides crucial information on the manner of dismemberment/disarticulation, the types of tools used and the general context in which the crime was perpetrated.",
"title": ""
}
] |
scidocsrr
|
a2a7b12b8a08fbcd25fe20136bc79e98
|
Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs
|
[
{
"docid": "6e4f0a770fe2a34f99957f252110b6bd",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "ded1f366eedb42d57bc927de05cefdab",
"text": "A typical knowledge-based question answering (KB-QA) system faces two challenges: one is to transform natural language questions into their meaning representations (MRs); the other is to retrieve answers from knowledge bases (KBs) using generated MRs. Unlike previous methods which treat them in a cascaded manner, we present a translation-based approach to solve these two tasks in one unified framework. We translate questions to answers based on CYK parsing. Answers as translations of the span covered by each CYK cell are obtained by a question translation method, which first generates formal triple queries as MRs for the span based on question patterns and relation expressions, and then retrieves answers from a given KB based on triple queries generated. A linear model is defined over derivations, and minimum error rate training is used to tune feature weights based on a set of question-answer pairs. Compared to a KB-QA system using a state-of-the-art semantic parser, our method achieves better results.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
}
] |
[
{
"docid": "9377e5de9d7a440aa5e73db10aa630f4",
"text": ". Micro-finance programmes targeting women became a major plank of donor poverty alleviation and gender strategies in the 1990s. Increasing evidence of the centrality of gender equality to poverty reduction and women’s higher credit repayment rates led to a general consensus on the desirability of targeting women. Not only ‘reaching’ but also ‘empowering’ women became the second official goal of the Micro-credit Summit Campaign.",
"title": ""
},
{
"docid": "74141327edf56eb5a198f446d12998a0",
"text": "Intramuscular myxomas of the hand are rare entities. Primarily found in the myocardium, these lesions also affect the bone and soft tissues in other parts of the body. This article describes a case of hypothenar muscles myxoma treated with local surgical excision after frozen section biopsy with tumor-free margins. Radiographic images of the axial and appendicular skeleton were negative for fibrous dysplasia, and endocrine studies were within normal limits. The 8-year follow-up period has been uneventful, with no complications. The patient is currently recurrence free, with normal intrinsic hand function.",
"title": ""
},
{
"docid": "5b4f1b4725393a87a83abfe14516dd0c",
"text": "The goal of traffic forecasting is to predict the future vital indicators (such as speed, volume and density) of the local traffic network in reasonable response time. Due to the dynamics and complexity of traffic network flow, typical simulation experiments and classic statistical methods cannot satisfy the requirements of mid-and-long term forecasting. In this work, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Neural Network (STGCNN), to tackle this spatio-temporal sequence forecasting task. Instead of applying recurrent models to sequence learning, we build our model entirely on convolutional neural networks (CNNs) with gated linear units (GLU) and highway networks. The proposed architecture fully employs the graph structure of the road networks and enables faster training. Experiments show that our ST-GCNN network captures comprehensive spatio-temporal correlations throughout complex traffic network and consistently outperforms state-of-the-art baseline algorithms on several real-world traffic datasets.",
"title": ""
},
{
"docid": "a04dd1bd1b6107747b2091b8aa2dfeb7",
"text": "This paper presents 300-GHz step-profiled corrugated horn antennas, aiming at their integration in low-temperature co-fired ceramic (LTCC) packages. Using substrate integrated waveguide technology, the cavity inside the multi-layer LTCC substrate and a surrounding via fence are used to form a feeding hollow waveguide and horn structure. Owing to the vertical configuration, we were able to design the corrugations and stepped profile of horn antennas to approximate smooth metallic surface. To verify the design experimentally, the LTCC waveguides and horn antennas were fabricated with an LTCC multi-layer process. The LTCC waveguide exhibits insertion loss of 0.6 dB/mm, and the LTCC horn antenna exhibits 18-dBi peak gain and 100-GHz bandwidth with more than 10-dB return loss. The size of the horn antenna is only 5×5×2.8 mm3, which makes it easy to integrate it in LTCC transceiver modules.",
"title": ""
},
{
"docid": "49ff711b6c91c9ec42e16ce2f3bb435b",
"text": "In this letter, a wideband three-section branch-line hybrid with harmonic suppression is designed using a novel transmission line model. The proposed topology is constructed using a coupled line, two series transmission lines, and open-ended stubs. The required design equations are obtained by applying even- and odd-mode analysis. To support these equations, a three-section branch-line hybrid working at 0.9 GHz is fabricated and tested. The physical area of the prototype is reduced by 87.7% of the conventional hybrid and the fractional bandwidth is greater than 52%. In addition, the proposed technique can eliminate second harmonic by a level better than 15 dB.",
"title": ""
},
{
"docid": "0b17e52a3fd306c1e990b628d41a973f",
"text": "Electronic health records (EHRs) have contributed to the computerization of patient records so that it can be used not only for efficient and systematic medical services, but also for research on data science. In this paper, we compared disease prediction performance of generative adversarial networks (GANs) and conventional learning algorithms in combination with missing value prediction methods. As a result, the highest accuracy of 98.05% was obtained using stacked autoencoder as the missing value prediction method and auxiliary classifier GANs (AC-GANs) as the disease predicting method. Results show that the combination of stacked autoencoder and AC-GANs performs significantly greater than existing algorithms at the problem of disease prediction in which missing values and class imbalance exist.",
"title": ""
},
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "8470245ef870eb5246d65fa3eb1e760a",
"text": "Educational spaces play an important role in enhancing learning productivity levels of society people as the most important places to human train. Considering the cost, time and energy spending on these spaces, trying to design efficient and optimized environment is a necessity. Achieving efficient environments requires changing environmental criteria so that they can have a positive impact on the activities and learning in users. Therefore, creating suitable conditions for promoting learning in users requires full utilization of the comprehensive knowledge of architecture and the design of the physical environment with respect to the environmental, social and aesthetic dimensions; Which will naturally increase the usefulness of people in space and make optimal use of the expenses spent on building schools and the time spent on education and training.The main aim of this study was to find physical variables affecting on increasing productivity in learning environments. This study is quantitative-qualitative and was done in two research methods: a) survey research methods (survey) b) correlation method. The samples were teachers and students in secondary schools’ in Zahedan city, the sample size was 310 people. Variables were extracted using the literature review and deep interviews with professors and experts. The questionnaire was obtained using variables and it is used to collect the views of teachers and students. Cronbach’s alpha coefficient was 0.89 which indicates that the information gathering tool is acceptable. The findings shows that there are four main physical factor as: 1. Physical comfort, 2. Space layouts, 3. Psychological factors and 4. Visual factors thet they are affecting positively on space productivity. Each of the environmental factors play an important role in improving the learning quality and increasing interest in attending learning environments; therefore, the desired environment improves the productivity of the educational spaces by improving the components of productivity.",
"title": ""
},
{
"docid": "cbc59d5b33865b56e549fd2ffbc43c4a",
"text": "We propose a theory that gives formal semantics to word-level alignments defined over parallel corpora. We use our theory to introduce a linear algorithm that can be used to derive from word-aligned, parallel corpora the minimal set of syntactically motivated transformation rules that explain human translation data.",
"title": ""
},
{
"docid": "f48639ad675b863a28bb1bc773664ab0",
"text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing of life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'. Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.",
"title": ""
},
{
"docid": "8aabafcfbb8a1b23e986fc9f4dbf5b01",
"text": "OBJECTIVE\nTo examine the factors associated with the persistence of childhood gender dysphoria (GD), and to assess the feelings of GD, body image, and sexual orientation in adolescence.\n\n\nMETHOD\nThe sample consisted of 127 adolescents (79 boys, 48 girls), who were referred for GD in childhood (<12 years of age) and followed up in adolescence. We examined childhood differences among persisters and desisters in demographics, psychological functioning, quality of peer relations and childhood GD, and adolescent reports of GD, body image, and sexual orientation. We examined contributions of childhood factors on the probability of persistence of GD into adolescence.\n\n\nRESULTS\nWe found a link between the intensity of GD in childhood and persistence of GD, as well as a higher probability of persistence among natal girls. Psychological functioning and the quality of peer relations did not predict the persistence of childhood GD. Formerly nonsignificant (age at childhood assessment) and unstudied factors (a cognitive and/or affective cross-gender identification and a social role transition) were associated with the persistence of childhood GD, and varied among natal boys and girls.\n\n\nCONCLUSION\nIntensity of early GD appears to be an important predictor of persistence of GD. Clinical recommendations for the support of children with GD may need to be developed independently for natal boys and for girls, as the presentation of boys and girls with GD is different, and different factors are predictive for the persistence of GD.",
"title": ""
},
{
"docid": "476bb80edf6c54f0b6415d19f027ee19",
"text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "3921107e01c28a9b739f10c51a48505f",
"text": "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"title": ""
},
{
"docid": "ac53cbf7b760978a4a4c7fa80095fd31",
"text": "Aggregation queries on data streams are evaluated over evolving and often overlapping logical views called windows. While the aggregation of periodic windows were extensively studied in the past through the use of aggregate sharing techniques such as Panes and Pairs, little to no work has been put in optimizing the aggregation of very common, non-periodic windows. Typical examples of non-periodic windows are punctuations and sessions which can implement complex business logic and are often expressed as user-defined operators on platforms such as Google Dataflow or Apache Storm. The aggregation of such non-periodic or user-defined windows either falls back to expensive, best-effort aggregate sharing methods, or is not optimized at all.\n In this paper we present a technique to perform efficient aggregate sharing for data stream windows, which are declared as user-defined functions (UDFs) and can contain arbitrary business logic. To this end, we first introduce the concept of User-Defined Windows (UDWs), a simple, UDF-based programming abstraction that allows users to programmatically define custom windows. We then define semantics for UDWs, based on which we design Cutty, a low-cost aggregate sharing technique. Cutty improves and outperforms the state of the art for aggregate sharing on single and multiple queries. Moreover, it enables aggregate sharing for a broad class of non-periodic UDWs. We implemented our techniques on Apache Flink, an open source stream processing system, and performed experiments demonstrating orders of magnitude of reduction in aggregation costs compared to the state of the art.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "ae687136682fd78e9a92797c2c24ddb0",
"text": "Not all global health issues are truly global, but the neglected epidemic of stillbirths is one such urgent concern. The Lancet’s fi rst Series on stillbirths was published in 2011. Thanks to tenacious eff orts by the authors of that Series, led by Joy Lawn, together with the impetus of a wider maternal and child health community, stillbirths have been recognised as an essential part of the post-2015 sustainable development agenda, expressed through a new Global Strategy for Women’s, Children’s and Adolescents’ Health which was launched at the UN General Assembly in 2015. But recognising is not the same as doing. We now present a second Series on stillbirths, which is predicated on the idea of ending preventable stillbirth deaths by 2030. As this Series amply proves, such an ambitious goal is possible. The fi ve Series papers off er a roadmap for eliminating one of the most neglected tragedies in global health today. Perhaps the greatest obstacle to addressing stillbirths is stigma. The utter despair and hopelessness felt by families who suff er a stillbirth is often turned inwards to fuel feelings of shame and failure. The idea of demanding action would be anathema for many women and men who have experienced the loss of a child in this appalling way. This Series dispels any notion that such self-recrimination is justifi ed. Most stillbirths have preventable causes—maternal infections, chronic diseases, undernutrition, obesity, to name only a few. The solutions to ending preventable stillbirths are therefore practicable, feasible, and cost eff ective. They form a core part of the continuum of care—from prenatal care and antenatal care, through skilled birth attendance, to newborn care. The number of stillbirths remains alarmingly high: 2·6 million stillbirths annually, with little reduction this past decade. But the truly horrifi c fi gure is 1·3 million intrapartum stillbirths. The idea of a child being alive at the beginning of labour and dying for entirely preventable reasons during the next few hours should be a health scandal of international proportions. Yet it is not. Our Series aims to make it so. When a stillbirth does occur, the health system can fail parents further by the absence of respectful, empathetic services, including bereavement care. Yet provision of such care is not only humane and necessary, it can also mitigate a range of negative emotional and psychological symptoms that mothers and fathers experience after the death of their baby, some of which can persist long after their loss. Ten nations account for two-thirds of stillbirths: India, Nigeria, Pakistan, China, Ethiopia, Democratic Republic of the Congo, Bangladesh, Indonesia, Tanzania, and Niger. Although 98% of stillbirths take place in low-income and middle-income countries, stillbirth rates also remain unacceptably high in high-income settings. Why? Partly because stillbirths are strongly linked to adverse social and economic determinants of health. The health system alone cannot address entirely the predicament of stillbirths. Only by tackling the causes of the causes of stillbirths will rates be defl ected downwards in high-income settings. There is one action we believe off ers promising prospects for accelerating progress to end stillbirths—stronger independent accountability both within countries and globally. 
By accountability, we mean better monitoring (with investment in high-quality data collection), stronger review (including, especially, civil society organisations), and more robust action (high-level political leadership, and not merely from a Ministry of Health). The UN’s new Independent Accountability Panel has an important part to play in this process. But the really urgent need is for stronger independent accountability in countries. And here is where a virtuous alliance might lie between health professionals, clinical and public health scientists, and civil society, including bereaved parents. We believe this Series off ers the spark to ignite a new alliance of common interests to end preventable stillbirths by 2030.",
"title": ""
},
{
"docid": "2f737bc87916e67b68aa96910d27b2cb",
"text": "-Imbalanced data set problem occurs in classification, where the number of instances of one class is much lower than the instances of the other classes. The main challenge in imbalance problem is that the small classes are often more useful, but standard classifiers tend to be weighed down by the huge classes and ignore the tiny ones. In machine learning the imbalanced datasets has become a critical problem and also usually found in many applications such as detection of fraudulent calls, bio-medical, engineering, remote-sensing, computer society and manufacturing industries. In order to overcome the problems several approaches have been proposed. In this paper a study on Imbalanced dataset problem and the solution is given.",
"title": ""
},
{
"docid": "dde9424652393fa66350ec6510c20e97",
"text": "Framed under a cognitive approach to task-based L2 learning, this study used a pedagogical approach to investigate the effects of three vocabulary lessons (one traditional and two task-based) on acquisition of basic meanings, forms and morphological aspects of Spanish words. Quantitative analysis performed on the data suggests that the type of pedagogical approach had no impact on immediate retrieval (after treatment) of targeted word forms, but it had an impact on long-term retrieval (one week) of targeted forms. In particular, task-based lessons seemed to be more effective than the Presentation, Practice and Production (PPP) lesson. The analysis also suggests that a task-based lesson with an explicit focus-on-forms component was more effective than a task-based lesson that did not incorporate this component in promoting acquisition of word morphological aspects. The results also indicate that the explicit focus on forms component may be more effective when placed at the end of the lesson, when meaning has been acquired. Results are explained in terms of qualitative differences in amounts of focus on form and meaning, type of form-focused instruction provided, and opportunities for on-line targeted output retrieval. The findings of this study provide evidence for the value of a proactive (Doughty and Williams, 1998a) form-focused approach to Task-Based L2 vocabulary learning, especially structure-based production tasks (Ellis, 2003). Overall, they suggest an important role of pedagogical tasks in teaching L2 vocabulary.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
}
] |
scidocsrr
|
d76ce933b91644bf8250b42b29799b67
|
A Description Logic Primer
|
[
{
"docid": "2c5b384a66fe8b3abef31fc605f9daf0",
"text": "Since achieving W3C recommendation status in 2004, the Web Ontology Language (OWL) has been successfully applied to many problems in computer science. Practical experience with OWL has been quite positive in general; however, it has also revealed room for improvement in several areas. We systematically analyze the identified shortcomings of OWL, such as expressivity issues, problems with its syntaxes, and deficiencies in the definition of OWL species. Furthermore, we present an overview of OWL 2—an extension to and revision of OWL that is currently being developed within the W3C OWL Working Group. Many aspects of OWL have been thoroughly reengineered in OWL 2, thus producing a robust platform for future development of the language.",
"title": ""
},
{
"docid": "79cdb154262b6588abec7c374f6a289f",
"text": "We propose a new family of description logics (DLs), called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, unions of conjunctive queries) over the instance level (ABox) of the DL knowledge base. We show that, for the DLs of the DL-Lite family, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is LogSpace in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial-time data complexity for query answering over DL knowledge bases. Notably our logics allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current database management systems. Since even slight extensions to the logics of the DL-Lite family make query answering at least NLogSpace in data complexity, thus ruling out the possibility of using on-the-shelf relational technology for query processing, we can conclude that the logics of the DL-Lite family are the maximal DLs supporting efficient query answering over large amounts of instances.",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
}
] |
[
{
"docid": "3906637b2c1df46a4eaa8b3e762a2c68",
"text": "In this paper, we investigate factors and issues related to human locomotion behavior and proxemics in the presence of a real or virtual human in augmented reality (AR). First, we discuss a unique issue with current-state optical see-through head-mounted displays, namely the mismatch between a small augmented visual field and a large unaugmented periphery, and its potential impact on locomotion behavior in close proximity of virtual content. We discuss a potential simple solution based on restricting the field of view to the central region, and we present the results of a controlled human-subject study. The study results show objective benefits for this approach in producing behaviors that more closely match those that occur when seeing a real human, but also some drawbacks in overall acceptance of the restricted field of view. Second, we discuss the limited multimodal feedback provided by virtual humans in AR, present a potential improvement based on vibrotactile feedback induced via the floor to compensate for the limited augmented visual field, and report results showing that benefits of such vibrations are less visible in objective locomotion behavior than in subjective estimates of co-presence. Third, we investigate and document significant differences in the effects that real and virtual humans have on locomotion behavior in AR with respect to clearance distances, walking speed, and head motions. We discuss potential explanations for these effects related to social expectations, and analyze effects of different types of behaviors including idle standing, jumping, and walking that such real or virtual humans may exhibit in the presence of an observer.",
"title": ""
},
{
"docid": "94a6106cac2ecd3362c81fc6fd93df28",
"text": "We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as contextfree languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "35fbdf776186afa7d8991fa4ff22503d",
"text": "Lang Linguist Compass 2016; 10: 701–719 wileyo Abstract Research and industry are becoming more and more interested in finding automatically the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.",
"title": ""
},
{
"docid": "17ed052368311073f7f18fd423c817e9",
"text": "We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG for solving large scale machine learning problems. We establish the convergence results of K-AVG for nonconvex objectives. Our analysis of K-AVG applies to many existing variants of synchronous SGD. We explain why the Kstep delay is necessary and leads to better performance than traditional parallel stochastic gradient descent which is equivalent to K-AVG withK = 1. We also show that K-AVG scales better with the number of learners than asynchronous stochastic gradient descent (ASGD). Another advantage of K-AVG over ASGD is that it allows larger stepsizes and facilitates faster convergence. On a cluster of 128 GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for training with the CIFAR-10 dataset.",
"title": ""
},
{
"docid": "ff72ade7fdfba55c0f6ab7b5f8b74eb7",
"text": "Automatic detection of facial features in an image is important stage for various facial image interpretation work, such as face recognition, facial expression recognition, 3Dface modeling and facial features tracking. Detection of facial features like eye, pupil, mouth, nose, nostrils, lip corners, eye corners etc., with different facial expression and illumination is a challenging task. In this paper, we presented different methods for fully automatic detection of facial features. Viola-Jones' object detector along with haar-like cascaded features are used to detect face, eyes and nose. Novel techniques using the basic concepts of facial geometry, are proposed to locate the mouth position, nose position and eyes position. The estimation of detection region for features like eye, nose and mouth enhanced the detection accuracy significantly. An algorithm, using the H-plane of the HSV color space is proposed for detecting eye pupil from the eye detected region. FEI database of frontal face images is mainly used to test the algorithm. Proposed algorithm is tested over 100 frontal face images with two different facial expression (neutral face and smiling face). The results obtained are found to be 100% accurate for lip, lip corners, nose and nostrils detection. The eye corners, and eye pupil detection is giving approximately 95% accurate results.",
"title": ""
},
{
"docid": "75ed4cabbb53d4c75fda3a291ea0ab67",
"text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.",
"title": ""
},
{
"docid": "5fd3046c02e2051399c0569a0765d2bf",
"text": "Five test runs were performed to assess possible bias when performing the loss on ignition (LOI) method to estimate organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO 3 at 950 °C, whereas LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40–70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposure of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test-run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment including ten different laboratories was carried out using each laboratory’s own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised method suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory measuring affected LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We, therefore, recommend analysts to be consistent in the LOI method used in relation to the ignition temperatures, exposure times, and the sample size and to include information on these three parameters when referring to the method.",
"title": ""
},
{
"docid": "4a37bf3434b581102fc3b6247bd7b84a",
"text": "Business Intelligence (BI) has become one of the most important research areas that helps organizations and managers to better decision making process. This paper aims to show the barriers to BI adoption and discusses the most commonly used Business Intelligence Maturity Models (BIMMs). The aim also is to highlight the pitfalls of these BIMMs in order reach a solution. Using new techniques such as Service Oriented Architecture (SOA), Service Oriented Business Intelligence (SOBI) or Event Driven Architecture (EDA) leads to a new model. The proposed model named Service-Oriented Business Intelligence Maturity Model (SOBIMM) is briefly described in this paper.",
"title": ""
},
{
"docid": "3ebf234cbd1e0af70b1289d7f2e109d7",
"text": "This article reviews the evolutionary origins and functions of the capacity for anxiety, and relevant clinical and research issues. Normal anxiety is an emotion that helps organisms defend against a wide variety of threats. There is a general capacity for normal defensive arousal, and subtypes of normal anxiety protect against particular kinds of threats. These normal subtypes correspond somewhat to mild forms of various anxiety disorders. Anxiety disorders arise from dysregulation of normal defensive responses, raising the possibility of a bypophobic disorder (too little anxiety). If a drug were discovered that abolished all defensive anxiety, it could do harm as well as good. Factors that have shaped anxiety-regulation mechanisms can explain prepotent and prepared tendencies to associate anxiety more quickly with certain cues than with others. These tendencies lead to excess fear of largely archaic dangers, like snakes, and too little fear of new threats, like cars. An understanding of the evolutionary origins, functions, and mechanisms of anxiety suggests new questions about anxiety disorders.",
"title": ""
},
{
"docid": "99d84e588208ac09629a02a8349c560a",
"text": "Psilocybin (4-phosphoryloxy-N,N-dimethyltryptamine) is the major psychoactive alkaloid of some species of mushrooms distributed worldwide. These mushrooms represent a growing problem regarding hallucinogenic drug abuse. Despite its experimental medical use in the 1960s, only very few pharmacological data about psilocybin were known until recently. Because of its still growing capacity for abuse and the widely dispersed data this review presents all the available pharmacological data about psilocybin.",
"title": ""
},
{
"docid": "c609be6ff8dce8917a5009eb4e40f1af",
"text": "Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expressions, and classifying phenotypes. To provide effective and efficient image classification and annotation for the ever-increasing microscopic images, it is desirable to have tools that can combine and compare various algorithms, and build customizable solution for different biological problems. However, current tools often offer a limited solution in generating user-friendly and extensible tools for annotating higher dimensional images that correspond to multiple complicated categories. We develop the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets as well as regions of interest (ROIs) in individual images for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of the about 20 built-in algorithms of feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be “chained” in a customizable way to form adaptive solution for various problems, and the plugin-based extensibility gives the tool an open architecture to incorporate future algorithms. We have applied BIOCAT to classification and annotation of images and ROIs of different properties with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based biological image classification of two- and three- dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological problems involving image classification and annotation. We also demonstrate the effectiveness of 3D anisotropic wavelet in classifying both 3D image sets and ROIs.",
"title": ""
},
{
"docid": "c46d7018ecca531dad19013496ef95a1",
"text": "A new method of logo detection in document images is proposed in this paper. It is based on the boundary extension of feature rectangles of which the definition is also given in this paper. This novel method takes advantage of a layout assumption that logos have background (white spaces) surrounding it in a document. Compared with other logo detection methods, this new method has the advantage that it is independent on logo shapes and very fast. After the logo candidates are detected, a simple decision tree is used to reduce the false positive from the logo candidate pool. We have tested our method on a public image database involving logos. Experiments show that our method is more precise and robust than the previous methods and is well qualified as an effective assistance in document retrieval.",
"title": ""
},
{
"docid": "3bfe197c8ba46a626fd19c95234392be",
"text": "In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and food and cooking in general. Code, data and models are publicly available",
"title": ""
},
{
"docid": "5828218248b4da8991b18dc698ef25ee",
"text": "Little is known about the mechanisms of smartphone features that are used in sealing relationships between psychopathology and problematic smartphone use. Our purpose was to investigate two specific smartphone usage types e process use and social use e for associations with depression and anxiety; and in accounting for relationships between anxiety/depression and problematic smartphone use. Social smartphone usage involves social feature engagement (e.g., social networking, messaging), while process usage involves non-social feature engagement (e.g., news consumption, entertainment, relaxation). 308 participants from Amazon's Mechanical Turk internet labor market answered questionnaires about their depression and anxiety symptoms, and problematic smartphone use along with process and social smartphone use dimensions. Statistically adjusting for age and sex, we discovered the association between anxiety symptoms was stronger with process versus social smartphone use. Depression symptom severity was negatively associated with greater social smartphone use. Process smartphone use was more strongly associated with problematic smartphone use. Finally, process smartphone use accounted for relationships between anxiety severity and problematic smartphone use. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cdd9a541e8735c605b73d7293d4c4eb4",
"text": "With the increased use of high degree-of-freedom robots that must perform tasks in real-time, there is a need for fast algorithms for motion planning. In this work, we view motion planning from a probabilistic perspective. We consider smooth continuous-time trajectories as samples from a Gaussian process (GP) and formulate the planning problem as probabilistic inference. We use factor graphs and numerical optimization to perform inference quickly, and we show how GP interpolation can further increase the speed of the algorithm. Our framework also allows us to incrementally update the solution of the planning problem to contend with changing conditions. We benchmark our algorithm against several recent trajectory optimization algorithms on planning problems in multiple environments. Our evaluation reveals that our approach is several times faster than previous algorithms while retaining robustness. Finally, we demonstrate the incremental version of our algorithm on replanning problems, and show that it often can find successful solutions in a fraction of the time required to replan from scratch.",
"title": ""
},
{
"docid": "47d2ebd3794647708d41c6b3d604e796",
"text": "Most stream data classification algorithms apply the supervised learning strategy which requires massive labeled data. Such approaches are impractical since labeled data are usually hard to obtain in reality. In this paper, we build a clustering feature decision tree model, CFDT, from data streams having both unlabeled and a small number of labeled examples. CFDT applies a micro-clustering algorithm that scans the data only once to provide the statistical summaries of the data for incremental decision tree induction. Micro-clusters also serve as classifiers in tree leaves to improve classification accuracy and reinforce the any-time property. Our experiments on synthetic and real-world datasets show that CFDT is highly scalable for data streams while generating high classification accuracy with high speed.",
"title": ""
},
{
"docid": "63b63bbaa2f61b2b39b46643655bad0a",
"text": "A tire-road friction coefficient estimation approach is proposed which makes use of the uncoupled lateral deflection profile of the tire carcass measured from inside the tire through the entire contact patch. The unique design of the developed wireless piezoelectric sensor enables the decoupling of the lateral carcass deformations from the radial and tangential deformations. The estimation of the tire-road friction coefficient depends on the estimation of slip angle, lateral tire force, aligning moment, and the use of a brush model. The tire slip angle is estimated as the slope of the lateral deflection curve at the leading edge of the contact patch. The portion of the deflection profile measured in the contact patch is assumed to be a superposition of three types of lateral carcass deformations, namely, shift, yaw, and bend. The force and moment acting on the tire are obtained by using the coefficients of a parabolic function which approximates the deflection profile inside the contact patch and whose terms represent each type of deformation. The estimated force, moment, and slip angle variables are then plugged into the brush model to estimate the tire-road friction coefficient. A specially constructed tire test rig is used to experimentally evaluate the performance of the developed estimation approach and the tire sensor. Experimental results show that the developed sensor can provide good estimation of both slip angle and tire-road friction coefficient.",
"title": ""
},
{
"docid": "5124bfe94345f2abe6f91fe717731945",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "ec181b897706d101136dcbcef6e84de9",
"text": "Working with large swarms of robots has challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. Kilobots solve this through their ease of maintenance and programming, and are widely used in several research laboratories worldwide where their low cost enables large-scale swarms studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities as they are only equipped with a single sensor. In some studies, this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location-and-state-based information to each robot, and receive information on each robots’ state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point or a pheromone trail. The augmented reality for Kilobots ( ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. The ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.",
"title": ""
}
] |
scidocsrr
|
debc3f6df4852c23fdeb6c1085980d2d
|
Learning Structured Sparsity in Deep Neural Networks
|
[
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
}
] |
[
{
"docid": "74fa56730057ae21f438df46054041c4",
"text": "Facial fractures can lead to long-term sequelae if not repaired. Complications from surgical approaches can be equally detrimental to the patient. Periorbital approaches via the lower lid can lead to ectropion, entropion, scleral show, canthal malposition, and lid edema.1–6 Ectropion can cause epiphora, whereas entropion often causes pain and irritation due to contact between the cilia and cornea. Transcutaneous and tranconjunctival approaches are commonly used to address fractures of the infraorbital rim and orbital floor. The transconjunctival approach is popular among otolaryngologists and ophthalmologists, whereas transcutaneous approaches are more commonly used by oral maxillofacial surgeons and plastic surgeons.7Ridgwayet al reported in theirmeta-analysis that lid complications are highest with the subciliary approach (19.1%) and lowest with transconjunctival approach (2.1%).5 Raschke et al also found a lower incidence of lower lid malpositionvia the transconjunctival approach comparedwith the subciliary approach.8 Regardless of approach, complications occur and thefacial traumasurgeonmustknowhowtomanage these issues. In this article, we will review the common complications of lower lid surgery and their treatment.",
"title": ""
},
{
"docid": "db6a91e0216440a4573aee6c78c78cbf",
"text": "ObjectiveHeart rate monitoring using wrist type Photoplethysmographic (PPG) signals is getting popularity because of construction simplicity and low cost of wearable devices. The task becomes very difficult due to the presence of various motion artifacts. The objective is to develop algorithms to reduce the effect of motion artifacts and thus obtain accurate heart rate estimation. MethodsProposed heart rate estimation scheme utilizes both time and frequency domain analyses. Unlike conventional single stage adaptive filter, multi-stage cascaded adaptive filtering is introduced by using three channel accelerometer data to reduce the effect of motion artifacts. Both recursive least squares (RLS) and least mean squares (LMS) adaptive filters are tested. Moreover, singular spectrum analysis (SSA) is employed to obtain improved spectral peak tracking. The outputs from the filter block and SSA operation are logically combined and used for spectral domain heart rate estimation. Finally, a tracking algorithm is incorporated considering neighbouring estimates. ResultsThe proposed method provides an average absolute error of 1.16 beat per minute (BPM) with a standard deviation of 1.74 BPM while tested on publicly available database consisting of recordings from 12 subjects during physical activities. ConclusionIt is found that the proposed method provides consistently better heart rate estimation performance in comparison to that recently reported by TROIKA, JOSS and SPECTRAP methods. SignificanceThe proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach and thus feasible for implementing in wearable devices to monitor heart rate for fitness and clinical purpose.",
"title": ""
},
{
"docid": "18dc7688d96eff7658e0cffc5b844231",
"text": "Neural networks have recently become good at engaging in dialog. However, current approaches are based solely on verbal text, lacking the richness of a real face-to-face conversation. We propose a neural conversation model that aims to read and generate facial gestures alongside with text. This allows our model to adapt its response based on the \"mood\" of the conversation. In particular, we introduce an RNN encoder-decoder that exploits the movement of facial muscles, as well as the verbal conversation. The decoder consists of two layers, where the lower layer aims at generating the verbal response and coarse facial expressions, while the second layer fills in the subtle gestures, making the generated output more smooth and natural. We train our neural network by having it \"watch\" 250 movies. We showcase our joint face-text model in generating more natural conversations through automatic metrics and a human study. We demonstrate an example application with a face-to-face chatting avatar.",
"title": ""
},
{
"docid": "fb00601b60bcd1f7a112e34d93d55d01",
"text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <s.liu3@tue.nl>.",
"title": ""
},
{
"docid": "5da1f7c3459b489564e731cbb41fa028",
"text": "We describe an algorithm and experimental work for vehicle detection using sensor node data. Both acoustic and magnetic signals are processed for vehicle detection. We propose a real-time vehicle detection algorithm called the adaptive threshold algorithm (ATA). The algorithm first computes the time-domain energy distribution curve and then slices the energy curve using a threshold updated adaptively by some decision states. Finally, the hard decision results from threshold slicing are passed to a finite-state machine, which makes the final vehicle detection decision. Real-time tests and offline simulations both demonstrate that the proposed algorithm is effective.",
"title": ""
},
{
"docid": "d2a205f2a6c6deff5d9560af8cf8ff7f",
"text": "MIDI files, when paired with corresponding audio recordings, can be used as ground truth for many music information retrieval tasks. We present a system which can efficiently match and align MIDI files to entries in a large corpus of audio content based solely on content, i.e., without using any metadata. The core of our approach is a convolutional network-based cross-modality hashing scheme which transforms feature matrices into sequences of vectors in a common Hamming space. Once represented in this way, we can efficiently perform large-scale dynamic time warping searches to match MIDI data to audio recordings. We evaluate our approach on the task of matching a huge corpus of MIDI files to the Million Song Dataset. 1. TRAINING DATA FOR MIR Central to the task of content-based Music Information Retrieval (MIR) is the curation of ground-truth data for tasks of interest (e.g. timestamped chord labels for automatic chord estimation, beat positions for beat tracking, prominent melody time series for melody extraction, etc.). The quantity and quality of this ground-truth is often instrumental in the success of MIR systems which utilize it as training data. Creating appropriate labels for a recording of a given song by hand typically requires person-hours on the order of the duration of the data, and so training data availability is a frequent bottleneck in content-based MIR tasks. MIDI files that are time-aligned to matching audio can provide ground-truth information [8,25] and can be utilized in score-informed source separation systems [9, 10]. A MIDI file can serve as a timed sequence of note annotations (a “piano roll”). It is much easier to estimate information such as beat locations, chord labels, or predominant melody from these representations than from an audio signal. A number of tools have been developed for inferring this kind of information from MIDI files [6, 7, 17, 19]. Halevy et al. [11] argue that some of the biggest successes in machine learning came about because “...a large training set of the input-output behavior that we seek to automate is available to us in the wild.” The motivation behind c Colin Raffel, Daniel P. W. Ellis. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Colin Raffel, Daniel P. W. Ellis. “LargeScale Content-Based Matching of MIDI and Audio Files”, 16th International Society for Music Information Retrieval Conference, 2015. J/Jerseygi.mid",
"title": ""
},
{
"docid": "1171b827d9057796a0dccc86ae414ea1",
"text": "The diffusion of new digital technologies renders digital transformation relevant for nearly every industry. Therefore, the maturity of firms in mastering this fundamental organizational change is increasingly discussed in practice-oriented literature. These studies, however, suffer from some shortcomings. Most importantly, digital maturity is typically described along a linear scale, thus assuming that all firms do and need to proceed through the same path. We challenge this assumption and derive a more differentiated classification scheme based on a comprehensive literature review as well as an exploratory analysis of a survey on digital transformation amongst 327 managers. Based on these findings we propose two scales for describing a firm’s digital maturity: first, the impact that digital transformation has on a specific firm; second, the readiness of the firm to master the upcoming changes. We demonstrate the usefulness of this two scale measure by empirically deriving five digital maturity clusters as well as further empirical evidence. Our framework illuminates the monolithic block of digital maturity by allowing for a more differentiated firm-specific assessment – thus, it may serve as a first foundation for future research on digital maturity.",
"title": ""
},
{
"docid": "0e852d6b3f5dbffc9324b25094fabd06",
"text": "Because of the increasing number of electronic components, the automotive manufacturers introduced data bus systems to decrease the number of discrete lines. Inside modern vehicles there are several bus systems that are used for communication to provide many safety-relevant functions with direct impact to the vehicle's behaviour. Due to missing security services, these in-car networks are unprotected against malicious attacks. Exemplarily focussing CAN, this article explains that the missing of authenticity and confidentiality are the most important issues concerning security risks for in-car communication. A flexible and adaptive solution using trusted communication groups is presented that enables confidential communication between components of a vehicle and guarantees that only authentic controllers - holding a certificate signed by the manufacturer - are able to be part of these closed communication groups.",
"title": ""
},
{
"docid": "e8f5efad22957b0a587ce94689f20c20",
"text": "Aiming at efficient similarity search, hash functions are designed to embed high-dimensional feature descriptors to low-dimensional binary codes such that similar descriptors will lead to binary codes with a short distance in the Hamming space. It is critical to effectively maintain the intrinsic structure and preserve the original information of data in a hashing algorithm. In this paper, we propose a novel hashing algorithm called Latent Structure Preserving Hashing (LSPH), with the target of finding a well-structured low-dimensional data representation from the original high-dimensional data through a novel objective function based on Nonnegative Matrix Factorization (NMF) with their corresponding Kullback-Leibler divergence of data distribution as the regularization term. Via exploiting the joint probabilistic distribution of data, LSPH can automatically learn the latent information and successfully preserve the structure of high-dimensional data. To further achieve robust performance with complex and nonlinear data, in this paper, we also contribute a more generalized multi-layer LSPH (ML-LSPH) framework, in which hierarchical representations can be effectively learned by a multiplicative up-propagation algorithm. Once obtaining the latent representations, the hash functions can be easily acquired through multi-variable logistic regression. Experimental results on three large-scale retrieval datasets, i.e., SIFT 1M, GIST 1M and 500 K TinyImage, show that ML-LSPH can achieve better performance than the single-layer LSPH and both of them outperform existing hashing techniques on large-scale data.",
"title": ""
},
{
"docid": "1f753b8e3c0178cabbc8a9f594c40c8c",
"text": "For easy comprehensibility, rules are preferrable to non-linear kernel functions in the analysis of bio-medical data. In this paper, we describe two rule induction approaches—C4.5 and our PCL classifier—for discovering rules from both traditional clinical data and recent gene expression or proteomic profiling data. C4.5 is a widely used method, but it has two weaknesses, the single coverage constraint and the fragmentation problem, that affect its accuracy. PCL is a new rule-based classifier that overcomes these two weaknesses of decision trees by using many significant rules. We present a thorough comparison to show that our PCL method is much more accurate than C4.5, and it is also superior to Bagging and Boosting in general.",
"title": ""
},
{
"docid": "d7072b82cb57b9ca7e4ebaf592e48a21",
"text": "Internet of Things (IoT) devices are typically deployed in resource (energy, computational capacity) constrained environments. Connecting such devices to the cloud is not practical due to variable network behavior as well as high latency overheads. Fog computing refers to a scalable, distributed computing architecture which moves computational tasks closer to Edge devices or smart gateways. As an example of mobile IoT scenarios, in robotic deployments, computationally intensive tasks such as run time mapping may be performed on peer robots or smart gateways. Most of these computational tasks involve running optimization algorithms inside compute nodes at run time and taking rapid decisions based on results. In this paper, we incorporate optimization libraries within the Robot Operating System (ROS) deployed on robotic sensor-actuators. Using the ROS based simulation environment Gazebo, we demonstrate case-study scenarios for runtime optimization. The use of optimized distributed computations are shown to provide significant improvement in latency and battery saving for large computational loads. The possibility to perform run time optimization opens up a wide range of use-cases in mobile IoT deployments.",
"title": ""
},
{
"docid": "363cdcc34c855e712707b5b920fbd113",
"text": "This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.",
"title": ""
},
{
"docid": "24ae75e7ed48507a2c5d5cbcf7f6c059",
"text": "Relative positioning systems play a vital role in current multirobot systems. We present a self-contained detection and tracking approach, where a robot estimates a distance (range) and an angle (bearing) to another robot using measurements extracted from the raw data provided by two laser range finders. We propose a method based on the detection of circular features with least-squares fitting and filtering out outliers using a map-based selection. We improve the estimate of the relative robot position and reduce its uncertainty by feeding measurements into a Kalman filter, resulting in an accurate tracking system. We evaluate the performance of the algorithm in a realistic indoor environment to demonstrate its robustness and reliability.",
"title": ""
},
{
"docid": "eaca5794d84a96f8c8e7807cf83c3f00",
"text": "Background Women represent 15% of practicing general surgeons. Gender-based discrimination has been implicated as discouraging women from surgery. We sought to determine women's perceptions of gender-based discrimination in the surgical training and working environment. Methods Following IRB approval, we fielded a pilot survey measuring perceptions and impact of gender-based discrimination in medical school, residency training, and surgical practice. It was sent electronically to 1,065 individual members of the Association of Women Surgeons. Results We received 334 responses from medical students, residents, and practicing physicians with a response rate of 31%. Eighty-seven percent experienced gender-based discrimination in medical school, 88% in residency, and 91% in practice. Perceived sources of gender-based discrimination included superiors, physician peers, clinical support staff, and patients, with 40% emanating from women and 60% from men. Conclusions The majority of responses indicated perceived gender-based discrimination during medical school, residency, and practice. Gender-based discrimination comes from both sexes and has a significant impact on women surgeons.",
"title": ""
},
{
"docid": "a57832d14088f76e694a06e1e455a0f9",
"text": "In this paper, we propose a novel Spin-Transfer Torque Magnetic Random-Access Memory (STT-MRAM) array design that could simultaneously work as non-volatile memory and implement a reconfigure in-memory logic operation without add-on logic circuits to the memory chip. The computed output could be simply read out like a typical MRAM bit-cell through the modified peripheral circuit. Such intrinsic in-memory computation can be used to process data locally and transfers the \"cooked\" data to the primary processing unit (i.e. CPU or GPU) for complex computation with high precision requirement. It greatly reduces power-hungry and long distance data communication, and further leads to extreme parallelism within memory. In this work, we further propose an in-memory edge extraction algorithm as a case study to demonstrate the efficiency of in-memory preprocessing methodology. The simulation results show that our edge extraction method reduces data communication as much as 8x for grayscale image, thus greatly reducing system energy consumption. Meanwhile, the F-measure result shows only ∼10% degradation compared to conventional edge detection operators, such as Prewitt, Sobel and Roberts.",
"title": ""
},
{
"docid": "e7035280ce0fed2690ee32da002d27e0",
"text": "Just as email spam has negatively impacted the user messaging experience, the rise of Web spam is threatening to severely degrade the quality of information on the World Wide Web. Fundamentally, Web spam is designed to pollute search engines and corrupt the user experience by driving traffic to particular spammed Web pages, regardless of the merits of those pages. In this paper, we identify an interesting link between email spam and Web spam, and we use this link to propose a novel technique for extracting large Web spam samples from the Web. Then, we present the Webb Spam Corpus – a first-of-its-kind, large-scale, and publicly available Web spam data set that was created using our automated Web spam collection method. The corpus consists of nearly 350,000 Web spam pages, making it more than two orders of magnitude larger than any other previously cited Web spam data set. Finally, we identify several application areas where the Webb Spam Corpus may be especially helpful. Interestingly, since the Webb Spam Corpus bridges the worlds of email spam and Web spam, we note that it can be used to aid traditional email spam classification algorithms through an analysis of the characteristics of the Web pages referenced by email messages.",
"title": ""
},
{
"docid": "15d3618efa3413456c6aebf474b18c92",
"text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematicalbased solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography",
"title": ""
},
{
"docid": "ef3ec9af6f5fe3ff71f5c54a1de262d8",
"text": "This paper proposes an information theoretic criterion for comparing two partitions, or clusterings, of the same data set. The criterion, called variation of information (VI), measures the amount of information lost and gained in changing from clustering C to clustering C′. The basic properties of VI are presented and discussed. We focus on two kinds of properties: (1) those that help one build intuition about the new criterion (in particular, it is shown the VI is a true metric on the space of clusterings), and (2) those that pertain to the comparability of VI values over different experimental conditions. As the latter properties have rarely been discussed explicitly before, other existing comparison criteria are also examined in their light. Finally we present the VI from an axiomatic point of view, showing that it is the only “sensible” criterion for comparing partitions that is both aligned to the lattice and convexely additive. As a consequence, we prove an impossibility result for comparing partitions: there is no criterion for comparing partitions that simultaneoulsly satisfies the above two desirable properties and is bounded.",
"title": ""
},
{
"docid": "d6d0069b9903860bef39f812471d2946",
"text": "Internet content has become one of the most important resources of information. Much of this information is in the form of natural language text and one of the important components of natural language text is named entities. So automatic recognition and classification of named entities has attracted researchers for many years. Named entities are mentioned in different textual forms in different documents. Also, the same textual mention may refer to different named entities. This problem is well known in NLP as a disambiguation problem. Named Entity Disambiguation (NED) refers to the task of mapping different named entity mentions in running text to their correct interpretations in a specific knowledge base (KB). NED is important for many applications like search engines and software agents that aim to aggregate information on real world entities from sources such as the Web. The main goal of this research is to develop new methods for named entity disambiguation, emphasising the importance of interdependency of named entity candidates of different textual mentions in the document. The thesis focuses on two connected problems related to disambiguation. The first is Candidates Generation, the process of finding a small set of named entity candidate entries in the knowledge base for a specific textual mention, where this set contains the correct entry in the knowledge base. The second problem is Collective Disambiguation, where all named entity textual mentions in the document are disambiguated jointly, using interdependence and semantic relations between the different NE candidates of different textual mentions. Wikipedia is used as a reference knowledge base in this research. An information retrieval framework is used to generate the named entity candidates for a textual mention. A novel document similarity function (NEBSim) based on NE co-occurrence",
"title": ""
},
{
"docid": "9868b4d1c4ab5eb92b9d8fbe2f1715a1",
"text": "The work presented in this paper focuses on the design of a novel flexure-based mechanism capable of delivering planar motion with three degrees of freedom (3-DOF). Pseudo rigid body modeling (PRBM) and kinematic analysis of the mechanism are used to predict the motion of the mechanism in the X-, Y- and θ-directions. Lever based amplification is used to enhance the displacement of the mechanism. The presented design is small and compact in size (about 142mm by 110mm). The presented 3-DOF flexure-based miniature micro/nano mechanism delivers smooth motion in X, Y and θ, with maximum displacements of 142.09 μm in X-direction, 120.36 μm in Y-direction and 6.026 mrad in θ-rotation.",
"title": ""
}
] |
scidocsrr
|
e4a74ef5419006c286539acfddfefb03
|
Dopamine, learning and motivation
|
[
{
"docid": "49c19e5417aa6a01c59f666ba7cc3522",
"text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.",
"title": ""
}
] |
[
{
"docid": "6b00269aca800918836e1e0c759165fc",
"text": "We add an interpretable semantics to the paraphrase database (PPDB). To date, the relationship between phrase pairs in the database has been weakly defined as approximately equivalent. We show that these pairs represent a variety of relations, including directed entailment (little girl/girl) and exclusion (nobody/someone). We automatically assign semantic entailment relations to entries in PPDB using features derived from past work on discovering inference rules from text and semantic taxonomy induction. We demonstrate that our model assigns these relations with high accuracy. In a downstream RTE task, our labels rival relations from WordNet and improve the coverage of a proof-based RTE system by 17%.",
"title": ""
},
{
"docid": "0f17511a99f77a00930f4e8be525f1f9",
"text": "The fourth member of the leucine-rich repeat-containing GPCR family (LGR4, frequently referred to as GPR48) and its cognate ligands, R-spondins (RSPOs) play crucial roles in the development of multiple organs as well as the survival of adult stem cells by activation of canonical Wnt signaling. Wnt/β-catenin signaling acts to regulate breast cancer; however, the molecular mechanisms determining its spatiotemporal regulation are largely unknown. In this study, we identified LGR4 as a master controller of Wnt/β-catenin signaling-mediated breast cancer tumorigenesis, metastasis, and cancer stem cell (CSC) maintenance. LGR4 expression in breast tumors correlated with poor prognosis. Either Lgr4 haploinsufficiency or mammary-specific deletion inhibited mouse mammary tumor virus (MMTV)- PyMT- and MMTV- Wnt1-driven mammary tumorigenesis and metastasis. Moreover, LGR4 down-regulation decreased in vitro migration and in vivo xenograft tumor growth and lung metastasis. Furthermore, Lgr4 deletion in MMTV- Wnt1 tumor cells or knockdown in human breast cancer cells decreased the number of functional CSCs by ∼90%. Canonical Wnt signaling was impaired in LGR4-deficient breast cancer cells, and LGR4 knockdown resulted in increased E-cadherin and decreased expression of N-cadherin and snail transcription factor -2 ( SNAI2) (also called SLUG), implicating LGR4 in regulation of epithelial-mesenchymal transition. Our findings support a crucial role of the Wnt signaling component LGR4 in breast cancer initiation, metastasis, and breast CSCs.-Yue, Z., Yuan, Z., Zeng, L., Wang, Y., Lai, L., Li, J., Sun, P., Xue, X., Qi, J., Yang, Z., Zheng, Y., Fang, Y., Li, D., Siwko, S., Li, Y., Luo, J., Liu, M. LGR4 modulates breast cancer initiation, metastasis, and cancer stem cells.",
"title": ""
},
{
"docid": "838e6c58f3bb7a0b8350d12d45813b5a",
"text": "Heterogeneous networks not only present a challenge of heterogeneity in the types of nodes and relations, but also the attributes and content associated with the nodes. While recent works have looked at representation learning on homogeneous and heterogeneous networks, there is no work that has collectively addressed the following challenges: (a) the heterogeneous structural information of the network consisting of multiple types of nodes and relations; (b) the unstructured semantic content (e.g., text) associated with nodes; and (c) online updates due to incoming new nodes in growing network. We address these challenges by developing a Content-Aware Representation Learning model (CARL). CARL performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both heterogeneous structural closeness and unstructured semantic relations among all nodes, as function of node content, that exist in the network. Furthermore, an additional online update module is proposed for efficiently learning representations of incoming nodes. Extensive experiments demonstrate that CARL outperforms state-of-the-art baselines in various heterogeneous network mining tasks, such as link prediction, document retrieval, node recommendation and relevance search. We also demonstrate the effectiveness of the CARL’s online update module through a category visualization study.",
"title": ""
},
{
"docid": "2bf0219394d87654d2824c805844fcaa",
"text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 kevin@wchiang.net • chhajed@uiuc.edu • jhess@uiuc.edu",
"title": ""
},
{
"docid": "98a65cca7217dfa720dd4ed2972c3bdd",
"text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.",
"title": ""
},
{
"docid": "e4632cf52719eea1565d04ec4e068e16",
"text": "This study examined the correlation between body mass index as independent variable, and body image and fear of negative evaluation as dependent variables, as well as the moderating role of self-esteem in these correlations. A total of 318 Malaysian young adults were conveniently recruited to do the self-administered survey on the demographic characteristics body image, fear of negative evaluation, and self-esteem. Partial least squares structural equation modeling was used to test the research hypotheses. The results revealed that body mass index was negatively associated with body image, while no such correlation was found with fear of negative evaluation. Meanwhile, the negative correlation of body mass index with body image was stronger among those with lower self-esteem, while a positive association of body mass index with fear of negative evaluation was significant only among individuals with low self-esteem.",
"title": ""
},
{
"docid": "c0d7b92c1b88a2c234eac67c5677dc4d",
"text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization",
"title": ""
},
{
"docid": "5501dc3f77d1117d84ecbd947a31df19",
"text": "How to make robot vision work robustly under varying lighting conditions and without the constraint of the current color-coded environment are two of the most challenging issues in the RoboCup community. In this paper, we present a robust omnidirectional vision sensor to deal with these issues for the RoboCup Middle Size League soccer robots, in which two novel algorithms are applied. The first one is a camera parameters auto-adjusting algorithm based on image entropy. The relationship between image entropy and camera parameters is verified by experiments, and camera parameters are optimized by maximizing image entropy to adapt the output of the omnidirectional vision to the varying illumination. The second one is a ball recognition method based on the omnidirectional vision without color classification. The conclusion is derived that the ball on the field can be imaged to be an ellipse approximately in our omnidirectional vision, and the arbitrary FIFA ball can be recognized by detecting the ellipse imaged by the ball. The experimental results show that a robust omnidirectional vision sensor can be realized by using the two algorithms mentioned above. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "194bea0d713d5d167e145e43b3c8b4e2",
"text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "6b064b9f4c90a60fab788f9d5aee8b58",
"text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.",
"title": ""
},
{
"docid": "60c887b5df030cc35ad805494d0d8c57",
"text": "Robots typically possess sensors of different modalities, such as colour cameras, inertial measurement units, and 3D laser scanners. Often, solving a particular problem becomes easier when more than one modality is used. However, while there are undeniable benefits to combine sensors of different modalities the process tends to be complicated. Segmenting scenes observed by the robot into a discrete set of classes is a central requirement for autonomy as understanding the scene is the first step to reason about future situations. Scene segmentation is commonly performed using either image data or 3D point cloud data. In computer vision many successful methods for scene segmentation are based on conditional random fields (CRF) where the maximum a posteriori (MAP) solution to the segmentation can be obtained by inference. In this paper we devise a new CRF inference method for scene segmentation that incorporates global constraints, enforcing the sets of nodes are assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose MAP solution is found using a gradient-based optimisation approach. The proposed method is evaluated on images and 3D point cloud data gathered in urban environments where image data provides the appearance features needed by the CRF, while the 3D point cloud data provides global spatial constraints over sets of nodes. Comparisons with belief propagation, conventional quadratic programming relaxation, and higher order potential CRF show the benefits of the proposed method.",
"title": ""
},
{
"docid": "4b97ee592753138c916b4c5621bee6fe",
"text": "We propose the very first non-intrusive measurement methodology for quantifying the performance of commodity Virtual Reality (VR) systems. Our methodology considers the VR system under test as a black-box and works with any VR applications. Multiple performance metrics on timing and positioning accuracy are considered, and detailed testbed setup and measurement steps are presented. We also apply our methodology to several VR systems in the market, and carefully analyze the experiment results. We make several observations: (i) 3D scene complexity affects the timing accuracy the most, (ii) most VR systems implement the dead reckoning algorithm, which incurs a non-trivial correction latency after incorrect predictions, and (iii) there exists an inherent trade-off between two positioning accuracy metrics: precision and sensitivity.",
"title": ""
},
{
"docid": "aaec22c0af0c2745d1bf5e4aa44f74f3",
"text": "Most users on social media have intrinsic characteristics, such as interests and political views, that can be exploited to identify and track them. It raises privacy and identity issues in online communities. In this paper we investigate the problem of user identity linkage on two behavior datasets collected from different experiments. Specifically, we focus on user linkage based on users' interaction behaviors with respect to content topics. We propose an embedding method to model a topic as a vector in a latent space so as to interpret its deep semantics. Then a user is modeled as a vector based on his or her interactions with topics. The embedding representations of topics are learned by optimizing the joint-objective: the compatibility between topics with similar semantics, the discriminative abilities of topics to distinguish identities, and the consistency of the same user's characteristics fromtwo datasets. The effectiveness of our method is verified on real-life datasets and the results show that it outperforms related methods.",
"title": ""
},
{
"docid": "f442fa8d061e32891f486a14c3a76748",
"text": "We compare and discuss various approaches to the problem of part of speech (POS) tagging of texts written in Kazakh, an agglutinative and highly inflectional Turkic language. In Kazakh a single root may produce hundreds of word forms, and it is difficult, if at all possible, to label enough training data to account for a vast set of all possible word forms in the language. Thus, current state of the art statistical POS taggers may not be as effective for Kazakh as for morphologically less complex languages, e.g. English. Also the choice of a POS tag set may influence the informativeness and the accuracy of tagging.",
"title": ""
},
{
"docid": "920a3f7d43295ee45fe689b7af5c7088",
"text": "Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation. We model the problem of inflection generation as a character sequence to sequence learning problem and present a variant of the neural encoder-decoder model for solving it. Our model is language independent and can be trained in both supervised and semi-supervised settings. We evaluate our system on seven datasets of morphologically rich languages and achieve either better or comparable results to existing state-of-the-art models of inflection generation.",
"title": ""
},
{
"docid": "b1c62a59a8ce3dd57ab2c00f7657cfef",
"text": "We developed a new method for estimation of vigilance level by using both EEG and EMG signals recorded during transition from wakefulness to sleep. Previous studies used only EEG signals for estimating the vigilance levels. In this study, it was aimed to estimate vigilance level by using both EEG and EMG signals for increasing the accuracy of the estimation rate. In our work, EEG and EMG signals were obtained from 30 subjects. In data preparation stage, EEG signals were separated to its subbands using wavelet transform for efficient discrimination, and chin EMG was used to verify and eliminate the movement artifacts. The changes in EEG and EMG were diagnosed while transition from wakefulness to sleep by using developed artificial neural network (ANN). Training and testing data sets consist of the subbanded components of EEG and power density of EMG signals were applied to the ANN for training and testing the system which gives three situations for the vigilance level of the subject: awake, drowsy, and sleep. The accuracy of estimation was about 98–99% while the accuracy of the previous study, which uses only EEG, was 95–96%.",
"title": ""
},
{
"docid": "a1d6a739b10ec93229c33e0a8607e75e",
"text": "We present and discuss the important business problem of estimating the effect of retention efforts on the Lifetime Value of a customer in the Telecommunications industry. We discuss the components of this problem, in particular customer value and length of service (or tenure) modeling, and present a novel segment-based approach, motivated by the segment-level view marketing analysts usually employ. We then describe how we build on this approach to estimate the effects of retention on Lifetime Value. Our solution has been successfully implemented in Amdocs' Business Insight (BI) platform, and we illustrate its usefulness in real-world scenarios.",
"title": ""
},
{
"docid": "66649e0b17ead976731bffcbfef16fd8",
"text": "This paper describes several low-cost methods for fabricating flexible electronic circuits on paper. The circuits comprise i) metallic wires (e.g., tin or zinc) that are deposited on the substrate by evaporation, sputtering, or airbrushing, and ii) discrete surface-mountable electronic components that are fastened with conductive adhesive directly to the wires. These electronic circuits—like conventional printed circuit boards—can be produced with electronic components that connect on both sides of the substrate. Unlike printed circuit boards made from fiberglass, ceramics, or polyimides, however, paper can be folded and creased (repeatedly), shaped to form threedimensional structures, trimmed using scissors, used to wick fluids (e.g., for microfluidic applications) and disposed of by incineration. Paper-based electronic circuits are thin and lightweight; they should be useful for applications in consumer electronics and packaging, for disposable systems for uses in the military and homeland security, for applications in medical sensing or low-cost portable diagnostics, for paper-based microelectromechanical systems, and for applications involving textiles.",
"title": ""
}
] |
scidocsrr
|
acc53c8a72c0289b45e76dbefcb269ee
|
Turning scientists into data explorers
|
[
{
"docid": "79c14cc420caa8db93bc74916ce5bb4d",
"text": "Hadoop has become the de facto platform for large-scale data analysis in commercial applications, and increasingly so in scientific applications. However, Hadoop's byte stream data model causes inefficiencies when used to process scientific data that is commonly stored in highly-structured, array-based binary file formats resulting in limited scalability of Hadoop applications in science. We introduce Sci-Hadoop, a Hadoop plugin allowing scientists to specify logical queries over array-based data models. Sci-Hadoop executes queries as map/reduce programs defined over the logical data model. We describe the implementation of a Sci-Hadoop prototype for NetCDF data sets and quantify the performance of five separate optimizations that address the following goals for several representative aggregate queries: reduce total data transfers, reduce remote reads, and reduce unnecessary reads. Two optimizations allow holistic aggregate queries to be evaluated opportunistically during the map phase; two additional optimizations intelligently partition input data to increase read locality, and one optimization avoids block scans by examining the data dependencies of an executing query to prune input partitions. Experiments involving a holistic function show run-time improvements of up to 8x, with drastic reductions of IO, both locally and over the network.",
"title": ""
}
] |
[
{
"docid": "4f2926c570fbb614f5bdfa20a9688a07",
"text": "We present a neural architecture for containment relation identification between medical events and/or temporal expressions. We experiment on a corpus of deidentified clinical notes in English from the Mayo Clinic, namely the THYME corpus. Our model achieves an F-measure of 0.613 and outperforms the best result reported on this corpus to date.",
"title": ""
},
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "4ba866eb1a9c541f87c9e3b7632cc5bf",
"text": "Biologists worry that the rapid rates of warming projected for the planet (1) will doom many species to extinction. Species could face extinction with climate change if climatically suitable habitat disappears or is made inaccessible by geographic barriers or species' inability to disperse (see the figure, panels A to E). Previous studies have provided region- or taxon-specific estimates of biodiversity loss with climate change that range from 0% to 54%, making it difficult to assess the seriousness of this problem. On page 571 of this issue, Urban (2) provides a synthetic and sobering estimate of climate change–induced biodiversity loss by applying a model-averaging approach to 131 of these studies. The result is a projection that up to one-sixth of all species may go extinct if we follow “business as usual” trajectories of carbon emissions.",
"title": ""
},
{
"docid": "543a4aacf3d0f3c33071b0543b699d3c",
"text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.",
"title": ""
},
{
"docid": "36f31dea196f2d7a74bc442f1c184024",
"text": "The causes of Parkinson's disease (PD), the second most common neurodegenerative disorder, are still largely unknown. Current thinking is that major gene mutations cause only a small proportion of all cases and that in most cases, non-genetic factors play a part, probably in interaction with susceptibility genes. Numerous epidemiological studies have been done to identify such non-genetic risk factors, but most were small and methodologically limited. Larger, well-designed prospective cohort studies have only recently reached a stage at which they have enough incident patients and person-years of follow-up to investigate possible risk factors and their interactions. In this article, we review what is known about the prevalence, incidence, risk factors, and prognosis of PD from epidemiological studies.",
"title": ""
},
{
"docid": "2059f42692358bb141fc716cc58510d2",
"text": "Airline ticket purchase timing is a strategic problem that requires both historical data and domain knowledge to solve consistently. Even with some historical information (often a feature of modern travel reservation web sites), it is difficult for consumers to make true cost-minimizing decisions. To address this problem, we introduce an automated agent which is able to optimize purchase timing on behalf of customers and provide performance estimates of its computed action policy based on past performance. We apply machine learning to recent ticket price quotes from many competing airlines for the target flight route. Our novelty lies in extending this using a systematic feature extraction technique incorporating elementary user-provided domain knowledge that greatly enhances the performance of machine learning algorithms. Using this technique, our agent achieves much closer to the optimal purchase policy than other proposed decision theoretic approaches for this domain.",
"title": ""
},
{
"docid": "8e1947a9e890ef110c75a52d706eec2a",
"text": "Despite the rapid increase in online shopping, the literature is silent in terms of the interrelationship between perceived risk factors, the marketing impacts, and their influence on product and web-vendor consumer trust. This research focuses on holidaymakers’ perspectives using Internet bookings for their holidays. The findings reveal the associations between Internet perceived risks and the relatively equal influence of product and e-channel risks in consumers’ trust, and that online purchasing intentions are equally influenced by product and e-channel consumer trust. They also illustrate the relationship between marketing strategies and perceived risks, and provide managerial suggestions for further e-purchasing tourism improvement.",
"title": ""
},
{
"docid": "1f4df0db1c554d83cf1fd4b429e9ef9a",
"text": "This paper presents a position-varied plate utilized for Thai license plate recognition using back propagation neural netwo rk (BPNN). In this method, a dimension image of the car is suitably decreased by image resizing (e.g. interpolation method), and then they are converted to gray images for inputs to plate localization process. The plate localization process is used to find the area position of the license plate for inputting to image segmentation process which is used to find edges of main characters in the license plate. After that, each of image characters received from character segmentation process is inserted into neural network to analyze the probable characters and numbers. In this experiment, the images of numbers and Thai characters are cross-validated by BPNN (training, validation and testing sets), and then 100 images of Thai license plate are used for testing. The results reveal that an accuracy of analysis is at approximately 97 % for the distance of the car and camera between 0.5m to 1m, and the angle of inclined plate varied from ±13 degrees.",
"title": ""
},
{
"docid": "8ba2b376995e3a6a02720a73012d590b",
"text": "This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified. We extend LEACH’s stochastic clusterhead selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30 % can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).",
"title": ""
},
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
},
{
"docid": "124c73eb861c0b2fb64d0084b3961859",
"text": "Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based in cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.",
"title": ""
},
{
"docid": "ce2590b39ef85a1a3e7d5b4914746a62",
"text": "In the smart grid system, an advanced meter infrastructure (AMI) is an integral subsystem mainly used to collect monthly consumption and load profile. Hence, a large amount of information will be exchanged within these systems. Data concentrator unit (DCU) is used to collect the information from smart meters before forwarding to meter data management system. In order to meet the AMI's QoS such as throughput and delay, the optimal placement for DCU has to be thoroughly investigated. This paper aims at developing an optimal location algorithm for the DCU placement in a non-beacon-mode IEEE 802.15.4 smart grid network. The optimization algorithm preliminarily computes the DCU position based on a minimum hop count metric. Nevertheless, it is possible that multiple positions achieving the minimum hop count may be found; therefore, the additional performance metric, i.e. the averaged throughput and delay, will be used to select the ultimately optimal location. In this paper, the maximum throughput with the acceptable averaged delay constraint is proposed by considering the behavior of the AMI meters which is almost stationary in the network. From the simulation results, it is obvious that the proposed methodology is significantly effective.",
"title": ""
},
{
"docid": "ce2d1c0e113aafdb0db35a3e21c7f0ff",
"text": "Previous works on facial expression analysis have shown that person specific models are advantageous with respect to generic ones for recognizing facial expressions of new users added to the gallery set. This finding is not surprising, due to the often significant inter-individual variability: different persons have different morphological aspects and express their emotions in different ways. However, acquiring person-specific labeled data for learning models is a very time consuming process. In this work we propose a new transfer learning method to compute personalized models without labeled target data Our approach is based on learning multiple person-specific classifiers for a set of source subjects and then directly transfer knowledge about the parameters of these classifiers to the target individual. The transfer process is obtained by learning a regression function which maps the data distribution associated to each source subject to the corresponding classifier's parameters. We tested our approach on two different application domains, Action Units (AUs) detection and spontaneous pain recognition, using publicly available datasets and showing its advantages with respect to the state-of-the-art both in term of accuracy and computational cost.",
"title": ""
},
{
"docid": "7eb150a364984512de830025a6e93e0c",
"text": "The mobile ecosystem is characterized by a large and complex network of companies interacting with each other, directly and indirectly, to provide a broad array of mobile products and services to end-customers. With the convergence of enabling technologies, the complexity of the mobile ecosystem is increasing multifold as new actors are emerging, new relations are formed, and the traditional distribution of power is shifted. Drawing on theories of complex systems, interfirm relationships, and the creative art and science of network visualization, this paper identifies key catalysts and develops a method to effectively map the complex structure and dynamics of over 7,000 global companies and 18,000 relationships in the mobile ecosystem. Our visual approach enables decision makers to explore the complexity of interfirm relations in the mobile ecosystem, understand their firmpsilas competitive position in a network context, and identify patterns that may influence their choice of innovation strategy or business models.",
"title": ""
},
{
"docid": "7cd992aec08167cb16ea1192a511f9aa",
"text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to a n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.",
"title": ""
},
{
"docid": "5404f89c379ffc79de345414baf1e084",
"text": "OBJECTIVES\nTo describe pelvic organ prolapse surgical success rates using a variety of definitions with differing requirements for anatomic, symptomatic, or re-treatment outcomes.\n\n\nMETHODS\nEighteen different surgical success definitions were evaluated in participants who underwent abdominal sacrocolpopexy within the Colpopexy and Urinary Reduction Efforts trial. The participants' assessments of overall improvement and rating of treatment success were compared between surgical success and failure for each of the definitions studied. The Wilcoxon rank sum test was used to identify significant differences in outcomes between success and failure.\n\n\nRESULTS\nTreatment success varied widely depending on definition used (19.2-97.2%). Approximately 71% of the participants considered their surgery \"very successful,\" and 85.2% considered themselves \"much better\" than before surgery. Definitions of success requiring all anatomic support to be proximal to the hymen had the lowest treatment success (19.2-57.6%). Approximately 94% achieved surgical success when it was defined as the absence of prolapse beyond the hymen. Subjective cure (absence of bulge symptoms) occurred in 92.1% while absence of re-treatment occurred in 97.2% of participants. Subjective cure was associated with significant improvements in the patient's assessment of both treatment success and overall improvement, more so than any other definition considered (P<.001 and <.001, respectively). Similarly, the greatest difference in symptom burden and health-related quality of life as measured by the Pelvic Organ Prolapse Distress Inventory and Pelvic Organ Prolapse Impact Questionnaire scores between treatment successes and failures was noted when success was defined as subjective cure (P<.001).\n\n\nCONCLUSION\nThe definition of success substantially affects treatment success rates after pelvic organ prolapse surgery. The absence of vaginal bulge symptoms postoperatively has a significant relationship with a patient's assessment of overall improvement, while anatomic success alone does not.\n\n\nLEVEL OF EVIDENCE\nII.",
"title": ""
},
{
"docid": "f17b32e8a6a4604d102ab699da145a7d",
"text": "BACKGROUND\nHealthcare-seeking behaviour in patients with diabetes mellitus (DM) has been investigated to a limited extent, and not in developing countries. Switches between different health sectors may interrupt glycaemic control, affecting health. The aim of the study was to explore healthcare-seeking behaviour, including use of complementary alternative medicine (CAM) and traditional healers, in Ugandans diagnosed with DM. Further, to study whether gender influenced healthcare-seeking behaviour.\n\n\nMETHODS\nThis is a descriptive study with a snowball sample from a community in Uganda. Semi-structured interviews were held with 16 women and 8 men, aged 25-70. Data were analysed by qualitative content analysis.\n\n\nRESULTS\nHealthcare was mainly sought among doctors and nurses in the professional sector because of severe symptoms related to DM and/or glycaemic control. Females more often focused on follow-up of DM and chronic pain in joints, while males described fewer problems. Among those who felt that healthcare had failed, most had turned to traditional healers in the folk sector for prescription of herbs or food supplements, more so in women than men. Males more often turned to private for-profit clinics while females more often used free governmental institutions.\n\n\nCONCLUSIONS\nHealthcare was mainly sought from nurses and physicians in the professional sector and females used more free-of-charge governmental institutions. Perceived failure in health care to manage DM or related complications led many, particularly women, to seek alternative treatment from CAM practitioners in the folk sector. Living conditions, including healthcare organisation and gender, seemed to influence healthcare seeking, but further studies are needed.",
"title": ""
},
{
"docid": "dbf683e908ea9e5962d0830e6b8d24fd",
"text": "This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter–receiver pairs and eavesdroppers. A hybrid full-duplex (FD)/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the FD mode sending jamming signals to confuse eavesdroppers upon their information receptions, and letting the other receivers work in the half-duplex mode just receiving their desired signals. The objective of this paper is to choose properly the fraction of FD receivers for achieving the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial tradeoff between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.",
"title": ""
},
{
"docid": "509075d64990cf7258c13dd0dfd5e282",
"text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.",
"title": ""
},
{
"docid": "205ed1eba187918ac6b4a98da863a6f2",
"text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.",
"title": ""
}
] |
scidocsrr
|
40075f172fc46bc3f3ab982b5b1663ca
|
Android Rooting: Methods, Detection, and Evasion
|
[
{
"docid": "0e4722012aeed8dc356aa8c49da8c74f",
"text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.",
"title": ""
},
{
"docid": "948d3835e90c530c4290e18f541d5ef2",
"text": "Each time a user installs an application on their Android phone they are presented with a full screen of information describing what access they will be granting that application. This information is intended to help them make two choices: whether or not they trust that the application will not damage the security of their device and whether or not they are willing to share their information with the application, developer, and partners in question. We performed a series of semi-structured interviews in two cities to determine whether people read and understand these permissions screens, and to better understand how people perceive the implications of these decisions. We find that the permissions displays are generally viewed and read, but not understood by Android users. Alarmingly, we find that people are unaware of the security risks associated with mobile apps and believe that app marketplaces test and reject applications. In sum, users are not currently well prepared to make informed privacy and security decisions around installing applications.",
"title": ""
}
] |
[
{
"docid": "1719ad98795f32a55f4e920e075ee798",
"text": "BACKGROUND\nUrinary tract infections (UTIs) are one of main health problems caused by many microorganisms, including uropathogenic Escherichia coli (UPEC). UPEC strains are the most frequent pathogens responsible for 85% and 50% of community and hospital acquired UTIs, respectively. UPEC strains have special virulence factors, including type 1 fimbriae, which can result in worsening of UTIs.\n\n\nOBJECTIVES\nThis study was performed to detect type 1 fimbriae (the FimH gene) among UPEC strains by molecular method.\n\n\nMATERIALS AND METHODS\nA total of 140 isolated E. coli strains from patients with UTI were identified using biochemical tests and then evaluated for the FimH gene by polymerase chain reaction (PCR) analysis.\n\n\nRESULTS\nThe UPEC isolates were identified using biochemical tests and were screened by PCR. The fimH gene was amplified using specific primers and showed a band about 164 bp. The FimH gene was found in 130 isolates (92.8%) of the UPEC strains. Of 130 isolates positive for the FimH gene, 62 (47.7%) and 68 (52.3%) belonged to hospitalized patients and outpatients, respectively.\n\n\nCONCLUSIONS\nThe results of this study indicated that more than 90% of E. coli isolates harbored the FimH gene. The high binding ability of FimH could result in the increased pathogenicity of E. coli; thus, FimH could be used as a possible diagnostic marker and/or vaccine candidate.",
"title": ""
},
{
"docid": "2445f9a80dc0f31ea39ade0ae8941f26",
"text": "Various groups of ascertainable individuals have been granted the status of “persons” under American law, while that status has been denied to other groups This article examines various analogies that might be drawn by courts in deciding whether to extend “person” status to intelligent machines, and the limitations that might be placed upon such recognition As an alternative analysis: this article questions the legal status of various human/machine interfaces, and notes the difficulty in establishing an absolute point beyond which legal recognition will not extend COMPUTERS INCREASINGLY RESEMBLE their human creators More precisely, it is becoming increasingly difficult to distinguish some computer information-processing from that of humans, judging from the final product. Computers have proven capable of far more physical and mental “human” functions than most people believed was possible. The increasing similarity between humans and machines might eventually require legal recognition of computers as “persons.” In the United States, there are two triers t’o such Views expressed here are those of the author @ Llarshal S. Willick 1982 41 rights reserved Editor’s Note: This article is written by an attorney using a common reference style for legal citations The system of citation is more complex than systems ordinarily used in scientific publications since it must provide numerous variations for different sources of evidence and jurisdictions We have decided not to change t.his article’s format for citations. legal recognition. The first tier determines which ascertainable individuals are considered persons (e g., blacks, yes; fetuses, no.) The second tier determines which rights and obligations are vested in the recognized persons, based on their observed or presumed capacities (e.g., the insane are restricted; eighteen-year-olds can vote.) The legal system is more evolutionary than revolutionary, however. Changes in which individuals should be recognized as persons under the law tend to be in response to changing cult,ural and economic realities, rather than the result of advance planning. Similarly, shifts in the allocation of legal rights and obligations are usually the result of societal pressures that do not result from a dispassionate masterplanning of society. Courts attempt to analogize new problems to those previously settled, where possible: the process is necessarily haphazard. As “intelligent” machines appear, t,hey will pervade a society in which computers play an increasingly significant part, but in which they will have no recognized legal personality. The question of what rights they should have will most probably not have been addressed. It is therefore most likely that computers will enter the legal arena through the courts The myriad acts of countless individuals will eventually give rise to a situat,ion in which some judicial decision regarding computer personality is needed in order to determine the rights of the parties to a THE AI MAGAZINE Summer 1983 5 AI Magazine Volume 4 Number 2 (1983) (© AAAI)",
"title": ""
},
{
"docid": "4284f5cb44a2c466dd7ea9e7ee2fc387",
"text": "As an iMetrics technique, co-word analysis is used to describe the status of various subject areas, however, iMetrics itself is not examined by a co-word analysis. For the purpose of using co-word analysis, this study tries to investigate the intellectual structure of iMetrics during the period of 1978 to 2014. The research data are retrieved from two core journals on iMetrics research ( Scientometrics , and Journal of Informetrics ) and relevant articles in six journals publishing iMetrics studies. Application of hierarchical clustering led to the formation of 11 clusters representing the intellectual structure of iMetrics, including “Scientometric Databases and Indicators,” “Citation Analysis,” “Sociology of Science,” “Issues Related to Rankings of Universities, Journals, etc.,” “Information Visualization and Retrieval,” “Mapping Intellectual Structure of Science,” “Webometrics,” “Industry–University– Government Relations,” “Technometrics (Innovation and Patents), “Scientific Collaboration in Universities”, and “Basics of Network Analysis.” Furthermore, a two-dimensional map and a strategic diagram are drawn to clarify the structure, maturity, and cohesion of clusters. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3b90d2c7858680a9d90c49e63d39c7c6",
"text": "With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNN networks, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.",
"title": ""
},
{
"docid": "788f02363d1cd96cf1786e98deac0a8c",
"text": "This paper investigates the use of color information when used within a state-of-the-art large scale image search system. We introduce a simple yet effective and efficient color signature generation procedure. It is used either to produce global or local descriptors. As a global descriptor, it outperforms several state-of-the-art color description methods, in particular the bag-of-words method based on color SIFT. As a local descriptor, our signature is used jointly with SIFT descriptors (no color) to provide complementary information. This significantly improves the recognition rate, outperforming the state of the art on two image search benchmarks. We provide an open source package of our signature (http://www.kooaba.com/en/learnmore/labs/).",
"title": ""
},
{
"docid": "17c4ad36c7e97097d783382d7450279c",
"text": "Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, lager server farms or mirroring are used, both of which are expensive. An inexpensive alternative are peer-to-peer based replication systems, where users who retrieve the file, act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peerto-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected on a five months long period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to the classical server-based content distribution.",
"title": ""
},
{
"docid": "930cc322737ea975cd077dcec2935f4d",
"text": "Metaphor is one of the most studied and widespread figures of speech and an essential element of individual style. In this paper we look at metaphor identification in Adjective-Noun pairs. We show that using a single neural network combined with pre-trained vector embeddings can outperform the state of the art in terms of accuracy. In specific, the approach presented in this paper is based on two ideas: a) transfer learning via using pre-trained vectors representing adjective noun pairs, and b) a neural network as a model of composition that predicts a metaphoricity score as output. We present several different architectures for our system and evaluate their performances. Variations on dataset size and on the kinds of embeddings are also investigated. We show considerable improvement over the previous approaches both in terms of accuracy and w.r.t the size of annotated training data.",
"title": ""
},
{
"docid": "631b6c1bce729a25c02f499464df7a4f",
"text": "Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.",
"title": ""
},
{
"docid": "c9018c9ba911bae219e29d8b32f7452a",
"text": "Automatic character generation, expected to save much time and labor, is an appealing solution for new typeface design. Inspired by the recent advancement in Generative Adversarial Networks (GANs), this paper proposes a Hierarchical Generative Adversarial Network (HGAN) for typeface transformation. The proposed HGAN consists of two sub-networks: (1) a transfer network mapping characters from one typeface to another preserving the corresponding structural information, which includes a content encoder and a hierarchical generator. (2) a hierarchical adversarial discriminator which distinguishes samples generated by the transfer network from real samples. Considering the unique properties of characters, different from original GANs, a hierarchical structure is proposed, which output the transferred characters in different phase of generator and at the same time, making the True/False judgment not only based on the final extracting features but also intermediate features in discriminator. Experimenting with Chinese typeface transformation, we show that HGAN is an effective framework for font style transfer, from standard printed typeface to personal handwriting styles.",
"title": ""
},
{
"docid": "524b3d3948f5d3d7e6e1896b03a359e5",
"text": "The reduction in the operating voltage play a majorrole in improving the performance of the integratedcircuits.Apart from that lesser power consumption, reducedarea and smaller size of transistors are also the vital factors inthe design criteria and fabrication of the systems. This articleapproaches towards the increasing performance of the systemsby comparing different types of adder circuits. In this article, anew circuit has been designed using the TG technology. Basedon different parameters like average power consumption anddelay, it has been observed that the Carry look-ahead adderand Carry bypass adder consumes more power. TheComparative analysis of TG based 8-bit different AdderDesigns using 180nm technology using TANNER tool has beenconsidered.",
"title": ""
},
{
"docid": "caf0e4b601252125a65aaa7e7a3cba5a",
"text": "Recent advances in visual tracking methods allow following a given object or individual in presence of significant clutter or partial occl usions in a single or a set of overlapping camera views. The question of when person detections in different views or at different time instants can be linked to the same individual is of funda mental importance to the video analysis in large-scale network of cameras. This is the pers on reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effec tively address the challenges associated with changes in illumination, pose, and clothing a ppearance variation are discussed. More specifically, the development of a set of models that ca pture the overall appearance of an individual and can effectively be used for information retrieval are reviewed. Some of them provide a holistic description of a person, and some o th rs require an intermediate step where specific body parts need to be identified. Some ar e designed to extract appearance features over time, and some others can operate reliabl y also on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular it describes very fast procedures for computing co-occurrenc e matrices by leveraging a generalization of the integral representation of images. The alg orithms are deployed and tested in a camera network comprising of three cameras with non-overl apping field of views, where a multi-camera multi-target tracker links the tracks in dif ferent cameras by reidentifying the same people appearing in different views.",
"title": ""
},
{
"docid": "249d835b11078e26bc406ae98e773df6",
"text": "This paper addresses the problem of simultaneous estimation of a vehicle's ego motion and motions of multiple moving objects in the scene-called eoru motions-through a monocular vehicle-mounted camera. Localization of multiple moving objects and estimation of their motions is crucial for autonomous vehicles. Conventional localization and mapping techniques (e.g., visual odometry and simultaneous localization and mapping) can only estimate the ego motion of the vehicle. The capability of a robot localization pipeline to deal with multiple motions has not been widely investigated in the literature. We present a theoretical framework for robust estimation of multiple relative motions in addition to the camera ego motion. First, the framework for general unconstrained motion is introduced and then it is adapted to exploit the vehicle kinematic constraints to increase efficiency. The method is based on projective factorization of the multiple-trajectory matrix. First, the ego motion is segmented and then several hypotheses are generated for the eoru motions. All the hypotheses are evaluated and the one with the smallest reprojection error is selected. The proposed framework does not need any a priori knowledge of the number of motions and is robust to noisy image measurements. The method with a constrained motion model is evaluated on a popular street-level image dataset collected in urban environments (the KITTI dataset), including several relative ego-motion and eoru-motion scenarios. A benchmark dataset (Hopkins 155) is used to evaluate this method with a general motion model. The results are compared with those of the state-of-the-art methods considering a similar problem, referred to as multibody structure from motion in the computer vision community.",
"title": ""
},
{
"docid": "6e8a9c37672ec575821da5c9c3145500",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9b1edc2fbbf8c6ec584708be0dd25327",
"text": "To date, a large number of algorithms to solve the problem of autonomous exploration and mapping has been presented. However, few efforts have been made to compare these techniques. In this paper, an extensive study of the most important methods for autonomous exploration and mapping of unknown environments is presented. Furthermore, a representative subset of these techniques has been chosen to be analysed. This subset contains methods that differ in the level of multi-robot coordination and in the grade of integration with the simultaneous localization and mapping (SLAM) algorithm. These exploration techniques were tested in simulation and compared using different criteria as exploration time or map quality. The results of this analysis are shown in this paper. The weaknesses and strengths of each strategy have been stated and the most appropriate algorithm for each application has been determined.",
"title": ""
},
{
"docid": "5df96510354ee3b37034a99faeff4956",
"text": "In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms stateof-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-theart methods is around 12.2% in F1score.",
"title": ""
},
{
"docid": "26cc16cfb31222c7f800ac75a9cbbd13",
"text": "In the WZ factorization the outermost parallel loop decreases the number of iterations executed at each step and this changes the amount of parallelism in each step. The aim of the paper is to present four strategies of parallelizing nested loops on multicore architectures on the example of the WZ factorization.",
"title": ""
},
{
"docid": "fd11fbed7a129e3853e73040cbabb56c",
"text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.",
"title": ""
},
{
"docid": "df609125f353505fed31eee302ac1742",
"text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].",
"title": ""
},
{
"docid": "b216a38960c537d52d94adc8d50a43df",
"text": "BACKGROUND\nAutologous platelet-rich plasma has attracted attention in various medical fields recently, including orthopedic, plastic, and dental surgeries and dermatology for its wound healing ability. Further, it has been used clinically in mesotherapy for skin rejuvenation.\n\n\nOBJECTIVE\nIn this study, the effects of activated platelet-rich plasma (aPRP) and activated platelet-poor plasma (aPPP) have been investigated on the remodelling of the extracellular matrix, a process that requires activation of dermal fibroblasts, which is essential for rejuvenation of aged skin.\n\n\nMETHODS\nPlatelet-rich plasma (PRP) and platelet-poor plasma (PPP) were prepared using a double-spin method and then activated with thrombin and calcium chloride. The proliferative effects of aPRP and aPPP were measured by [(3)H]thymidine incorporation assay, and their effects on matrix protein synthesis were assessed by quantifying levels of procollagen type I carboxy-terminal peptide (PIP) by enzyme-linked immunosorbent assay (ELISA). The production of collagen and matrix metalloproteinases (MMP) was studied by Western blotting and reverse transcriptase-polymerase chain reaction.\n\n\nRESULTS\nPlatelet numbers in PRP increased to 9.4-fold over baseline values. aPRP and aPPP both stimulated cell proliferation, with peak proliferation occurring in cells grown in 5% aPRP. Levels of PIP were highest in cells grown in the presence of 5% aPRP. Additionally, aPRP and aPPP increased the expression of type I collagen, MMP-1 protein, and mRNA in human dermal fibroblasts.\n\n\nCONCLUSION\naPRP and aPPP promote tissue remodelling in aged skin and may be used as adjuvant treatment to lasers for skin rejuvenation in cosmetic dermatology.",
"title": ""
},
{
"docid": "cc7b9d8bc0036b842f3c1f492998abc7",
"text": "This paper presents a new approach called Hierarchical Support Vector Machines (HSVM), to address multiclass problems. The method solves a series of maxcut problems to hierarchically and recursively partition the set of classes into two-subsets, till pure leaf nodes that have only one class label, are obtained. The SVM is applied at each internal node to construct the discriminant function for a binary metaclass classifier. Because maxcut unsupervised decomposition uses distance measures to investigate the natural class groupings. HSVM has a fast and intuitive SVM training process that requires little tuning and yields both high accuracy levels and good generalization. The HSVM method was applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana. Classification accuracies and generalization capability are compared to those achieved by the Best Basis Binary Hierarchical Classifier, a Random Forest CART binary decision tree classifier and Binary Hierarchical Support Vector Machines.",
"title": ""
}
] |
scidocsrr
|
e534d1c8b40dbbe5fdcc723b4c6f2e81
|
Real-Time Robot Localization, Vision, and Speech Recognition on Nvidia Jetson TX1
|
[
{
"docid": "c26db11bfb98e1fcb32ef7a01adadd1c",
"text": "Until recently the weight and size of inertial sensors has prohibited their use in domains such as human motion capture. Recent improvements in the performance of small and lightweight micromachined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible. This has resulted in an increased interest in the topic of inertial navigation, however current introductions to the subject fail to sufficiently describe the error characteristics of inertial systems. We introduce inertial navigation, focusing on strapdown systems based on MEMS devices. A combination of measurement and simulation is used to explore the error characteristics of such systems. For a simple inertial navigation system (INS) based on the Xsens Mtx inertial measurement unit (IMU), we show that the average error in position grows to over 150 m after 60 seconds of operation. The propagation of orientation errors caused by noise perturbing gyroscope signals is identified as the critical cause of such drift. By simulation we examine the significance of individual noise processes perturbing the gyroscope signals, identifying white noise as the process which contributes most to the overall drift of the system. Sensor fusion and domain specific constraints can be used to reduce drift in INSs. For an example INS we show that sensor fusion using magnetometers can reduce the average error in position obtained by the system after 60 seconds from over 150 m to around 5 m. We conclude that whilst MEMS IMU technology is rapidly improving, it is not yet possible to build a MEMS based INS which gives sub-meter position accuracy for more than one minute of operation.",
"title": ""
}
] |
[
{
"docid": "19607c362f07ebe0238e5940fefdf03f",
"text": "This paper presents an approach for generating photorealistic video sequences of dynamically varying facial expressions in human-agent interactions. To this end, we study human-human interactions to model the relationship and influence of one individual's facial expressions in the reaction of the other. We introduce a two level optimization of generative adversarial models, wherein the first stage generates a dynamically varying sequence of the agent's face sketch conditioned on facial expression features derived from the interacting human partner. This serves as an intermediate representation, which is used to condition a second stage generative model to synthesize high-quality video of the agent face. Our approach uses a novel L1 regularization term computed from layer features of the discriminator, which are integrated with the generator objective in the GAN model. Session constraints are also imposed on video frame generation to ensure appearance consistency between consecutive frames. We demonstrated that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively showed that agent facial expressions in the generated video clips reflect valid emotional reactions to behavior of the human partner.",
"title": ""
},
{
"docid": "49a2202592071a07109bd347563e4d6b",
"text": "To model deformation of anatomical shapes, non-linear statistics are required to take into account the non-linear structure of the data space. Computer implementations of non-linear statistics and differential geometry algorithms often lead to long and complex code sequences. The aim of the paper is to show how the Theano framework can be used for simple and concise implementation of complex differential geometry algorithms while being able to handle complex and high-dimensional data structures. We show how the Theano framework meets both of these requirements. The framework provides a symbolic language that allows mathematical equations to be directly translated into Theano code, and it is able to perform both fast CPU and GPU computations on highdimensional data. We show how different concepts from non-linear statistics and differential geometry can be implemented in Theano, and give examples of the implemented theory visualized on landmark representations of Corpus Callosum shapes.",
"title": ""
},
{
"docid": "19d90dd3843fab8a2d9a8d312c7763a0",
"text": "Clinically significant separation anxiety [SA] has been identified as being common among patients who do not respond to psychiatric interventions, regardless of intervention type (pharmacological or psychotherapeutic), across anxiety and mood disorders. An attachment formation and maintenance domain has been proposed as contributing to anxiety disorders. We therefore directly determined prevalence of SA in a population of adult treatment non-responders suffering from primary anxiety. In these separation anxious nonresponders, we pilot-tested an SA-focused, attachment-based psychotherapy for anxiety, Panic-Focused Psychodynamic Psychotherapy-eXtended Range [PFPP-XR], and assessed whether hypothesized biomarkers of attachment were engaged. We studied separation anxiety [SA] in 46 adults (ages 23-70 [mean 43.9 (14.9)]) with clinically significant anxiety symptoms (Hamilton Anxiety Rating Scale [HARS]≥15), and reporting a history of past non-response to psychotherapy and/or medication treatments. Thirty-seven (80%) had clinically significant symptoms of separation anxiety (Structured Clinical Interview for Separation Anxiety Symptoms [SCI-SAS] score≥8). Five of these subjects completed an open clinical trial of Panic Focused Psychodynamic Psychotherapy eXtended Range [PFPP-XR], a 21-24 session, 12-week manualized attachment-focused anxiolytic psychodynamic psychotherapy for anxiety. Patients improved on \"adult threshold\" SCI-SAS (current separation anxiety) (p=.016), HARS (p=0.002), and global severity, assessed by the Clinical Global Impression Scale (p=.0006), at treatment termination. Salivary oxytocin levels decreased 67% after treatment (p=.12). There was no significant change in high or low frequency HRV after treatment, but change in high frequency HRV inversely correlated with treatment change in oxytocin (p<.02), and change in low frequency HRV was positively associated with change in oxytocin (p<.02). SA is surprisingly prevalent among non-responders to standard anti-anxiety treatments, and it may represent a novel transdiagnostic target for treatment intervention in this population. Anxiety and global function improved in a small trial of a brief, manualized, attachment-focused psychodynamic psychotherapy, potentially supporting the clinical relevance of attachment dysfunction in this sample. The large decrease in oxytocin levels with treatment, although not statistically significant in this very small sample, suggests the need for further study of oxytocin as a putative biomarker or mediator of SA response. These pilot data generate testable hypotheses supporting an attachment domain underlying treatment-resistant anxiety, and new treatment strategies.",
"title": ""
},
{
"docid": "bf42a82730cfc7fb81866fbb345fef64",
"text": "MicroRNAs (miRNAs) are evolutionarily conserved small non-coding RNAs that have crucial roles in regulating gene expression. Increasing evidence supports a role for miRNAs in many human diseases, including cancer and autoimmune disorders. The function of miRNAs can be efficiently and specifically inhibited by chemically modified antisense oligonucleotides, supporting their potential as targets for the development of novel therapies for several diseases. In this Review we summarize our current knowledge of the design and performance of chemically modified miRNA-targeting antisense oligonucleotides, discuss various in vivo delivery strategies and analyse ongoing challenges to ensure the specificity and efficacy of therapeutic oligonucleotides in vivo. Finally, we review current progress on the clinical development of miRNA-targeting therapeutics.",
"title": ""
},
{
"docid": "6f67a18d8b3d969a8b69b80516c5e668",
"text": "Ubiquitous computing researchers are increasingly turning to sensorenabled “living laboratories” for the study of people and technologies in settings more natural than a typical laboratory. We describe the design and operation of the PlaceLab, a new live-in laboratory for the study of ubiquitous technologies in home settings. Volunteer research participants individually live in the PlaceLab for days or weeks at a time, treating it as a temporary home. Meanwhile, sensing devices integrated into the fabric of the architecture record a detailed description of their activities. The facility generates sensor and observational datasets that can be used for research in ubiquitous computing and other fields where domestic contexts impact behavior. We describe some of our experiences constructing and operating the living laboratory, and we detail a recently generated sample dataset, available online to researchers.",
"title": ""
},
{
"docid": "563f331d3ab4ae7e7f6282276a792b88",
"text": "The exponential growth of the data may lead us to the information explosion era, an era where most of the data cannot be managed easily. Text mining study is believed to prevent the world from entering that era. One of the text mining studies that may prevent the explosion era is text classification. It is a way to classify articles into several predefined categories. In this research, the classifier implements TF-IDF algorithm. TF-IDF is an algorithm that counts the word weight by considering frequency of the word (TF) and in how many files the word can be found (IDF). Since the IDF could see the in how many files a term can be found, it can control the weight of each word. When a word can be found in so many files, it will be considered as an unimportant word. TF-IDF has been proven to create a classifier that could classify news articles in Bahasa Indonesia in a high accuracy; 98.3%.",
"title": ""
},
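A minimal sketch of the TF-IDF weighting described in the passage above, assuming a plain whitespace tokenizer and a toy three-document corpus (both are illustrative assumptions, not details from the paper). Terms that occur in every document receive a weight of zero, which is how the IDF factor suppresses unimportant words.

```python
import math
from collections import Counter

# Toy corpus standing in for Bahasa Indonesia news articles (hypothetical data).
docs = [
    "pemerintah umumkan anggaran baru",
    "tim sepak bola menang besar",
    "anggaran pendidikan naik tahun ini",
]

def tf_idf(docs):
    """Weight each term by term frequency (TF) scaled by inverse document frequency (IDF)."""
    n_docs = len(docs)
    tokenized = [doc.split() for doc in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({
            # Terms found in all documents get log(1) = 0, i.e. "unimportant".
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

for i, w in enumerate(tf_idf(docs)):
    print(i, sorted(w.items(), key=lambda kv: -kv[1])[:3])
```

A classifier such as the one in the passage would then feed these per-document weight vectors into any standard learning algorithm.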
{
"docid": "50ab05d133dceaacf71b28b6a4b547bc",
"text": "The ability to measure human hand motions and interaction forces is critical to improving our understanding of manual gesturing and grasp mechanics. This knowledge serves as a basis for developing better tools for human skill training and rehabilitation, exploring more effective methods of designing and controlling robotic hands, and creating more sophisticated human-computer interaction devices which use complex hand motions as control inputs. This paper presents work on the design, fabrication, and experimental validation of a soft sensor-embedded glove which measures both hand motion and contact pressures during human gesturing and manipulation tasks. We design an array of liquid-metal embedded elastomer sensors to measure up to hundreds of Newtons of interaction forces across the human palm during manipulation tasks and to measure skin strains across phalangeal and carpal joints for joint motion tracking. The elastomeric sensors provide the mechanical compliance necessary to accommodate anatomical variations and permit a normal range of hand motion. We explore methods of assembling this soft sensor glove from modular, individually fabricated pressure and strain sensors and develop design guidelines for their mechanical integration. Experimental validation of a soft finger glove prototype demonstrates the sensitivity range of the designed sensors and the mechanical robustness of the proposed assembly method, and provides a basis for the production of a complete soft sensor glove from inexpensive modular sensor components.",
"title": ""
},
{
"docid": "0b7718d4ed9c06536f7b120bc73b72ce",
"text": "The feasibility of a 1.2kV GaN switch based on two series-connected 650V GaN transistors is demonstrated in this paper. Aside to achieve ultra-fast transitions and reduced switching energy loss, stacking GaN transistors enables compatibility with high-voltage GaN-on-Silicon technologies. A proof-of-concept is provided by electrical characterization and hard-switching operation of a GaN Super-Cascode built with discrete components. Further investigations to enhance stability with auxiliary components are carried out by simulations and co-integrated prototypes are proven at wafer level.",
"title": ""
},
{
"docid": "8ed2fa021e5b812de90795251b5c2b64",
"text": "A new implicit surface fitting method for surface reconstruction from scattered point data is proposed. The method combines an adaptive partition of unity approximation with least-squares RBF fitting and is capable of generating a high quality surface reconstruction. Given a set of points scattered over a smooth surface, first a sparse set of overlapped local approximations is constructed. The partition of unity generated from these local approximants already gives a faithful surface reconstruction. The final reconstruction is obtained by adding compactly supported RBFs. The main feature of the developed approach consists of using various regularization schemes which lead to economical, yet accurate surface reconstruction.",
"title": ""
},
{
"docid": "aa2bf057322c9a8d2c7d1ce7d6a384d3",
"text": "Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.",
"title": ""
},
{
"docid": "f3c9c84697019cfdfc598440ca157ed2",
"text": "A prominent account of prefrontal cortex (PFC) function is that single neurons within the PFC maintain representations of task-relevant stimuli in working memory. Evidence for this view comes from studies in which subjects hold a stimulus across a delay lasting up to several seconds. Persistent elevated activity in the PFC has been observed in animal models as well as in humans performing these tasks. This persistent activity has been interpreted as evidence for the encoding of the stimulus itself in working memory. However, recent findings have posed a challenge to this notion. A number of recent studies have examined neural data from the PFC and posterior sensory areas, both at the single neuron level in primates, and at a larger scale in humans, and have failed to find encoding of stimulus information in the PFC during tasks with a substantial working memory component. Strong stimulus related information, however, was seen in posterior sensory areas. These results suggest that delay period activity in the PFC might be better understood not as a signature of memory storage per se, but as a top down signal that influences posterior sensory areas where the actual working memory representations are maintained.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "1450c2025de3ea31271c9d6c56be016f",
"text": "The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
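A minimal sketch of the pulse wave velocity computation implied by the passage above: the transit time is taken as the delay between the ECG R-peak and the arrival of the bio-impedance pulse at the wrist, and PWV is the path length divided by that delay. The sampling rate, sample indices, and path length below are hypothetical values, not numbers from the paper.

```python
fs = 1000  # Hz, assumed common sampling rate for the ECG and bio-impedance channels

def pulse_wave_velocity(ecg_r_peak_idx, bi_pulse_idx, path_length_m):
    """PWV = distance / transit time, with transit time measured from the ECG R-peak
    (heart) to the arrival of the bio-impedance pulse at the wrist."""
    transit_time_s = (bi_pulse_idx - ecg_r_peak_idx) / fs
    return path_length_m / transit_time_s

# Hypothetical indices for one beat: R-peak at sample 500, BI pulse arrival at sample 680,
# with an assumed 0.75 m heart-to-wrist path.
print(f"PWV = {pulse_wave_velocity(500, 680, 0.75):.2f} m/s")
```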
{
"docid": "52722e0d7a11f2deccf5dec893a8febb",
"text": "With more than 340~million messages that are posted on Twitter every day, the amount of duplicate content as well as the demand for appropriate duplicate detection mechanisms is increasing tremendously. Yet there exists little research that aims at detecting near-duplicate content on microblogging platforms. We investigate the problem of near-duplicate detection on Twitter and introduce a framework that analyzes the tweets by comparing (i) syntactical characteristics, (ii) semantic similarity, and (iii) contextual information. Our framework provides different duplicate detection strategies that, among others, make use of external Web resources which are referenced from microposts. Machine learning is exploited in order to learn patterns that help identifying duplicate content. We put our duplicate detection framework into practice by integrating it into Twinder, a search engine for Twitter streams. An in-depth analysis shows that it allows Twinder to diversify search results and improve the quality of Twitter search. We conduct extensive experiments in which we (1) evaluate the quality of different strategies for detecting duplicates, (2) analyze the impact of various features on duplicate detection, (3) investigate the quality of strategies that classify to what exact level two microposts can be considered as duplicates and (4) optimize the process of identifying duplicate content on Twitter. Our results prove that semantic features which are extracted by our framework can boost the performance of detecting duplicates.",
"title": ""
},
{
"docid": "b587de667df04de627a3f4b5cc658341",
"text": "Terrorism has led to many problems in Thai societies, not only property damage but also civilian casualties. Predicting terrorism activities in advance can help prepare and manage risk from sabotage by these activities. This paper proposes a framework focusing on event classification in terrorism domain using fuzzy inference systems (FISs). Each FIS is a decisionmaking model combining fuzzy logic and approximate reasoning. It is generated in five main parts: the input interface, the fuzzification interface, knowledge base unit, decision making unit and output defuzzification interface. Adaptive neuro-fuzzy inference system (ANFIS) is a FIS model adapted by combining the fuzzy logic and neural network. The ANFIS utilizes automatic identification of fuzzy logic rules and adjustment of membership function (MF). Moreover, neural network can directly learn from data set to construct fuzzy logic rules and MF implemented in various applications. FIS settings are evaluated based on two comparisons. The first evaluation is the comparison between unstructured and structured events using the same FIS setting. The second comparison is the model settings between FIS and ANFIS for classifying structured events. The data set consists of news articles related to terrosim events in three southern provinces of Thailand. The experimental results show that the classification performance of the FIS resulting from structured events achieves satisfactory accuracy and is better than the unstructured events. In addition, the classification of structured events using ANFIS gives higher performance than the events using only FIS in the prediction of terrorism events. KeywordsEvent classification; terrorism domain; fuzzy inference system (FIS); adaptive neuro-fuzzy inference system (ANFIS); membership function (MF)",
"title": ""
},
{
"docid": "0b59b6f7e24a4c647ae656a0dc8cc3ab",
"text": "Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. r 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f41f4e3b27bda4b3000f3ab5ae9ef22a",
"text": "This paper, first analysis the performance of image segmentation techniques; K-mean clustering algorithm and region growing for cyst area extraction from liver images, then enhances the performance of K-mean by post-processing. The K-mean algorithm makes the clusters effectively. But it could not separate out the desired cluster (cyst) from the image. So, to enhance its performance for cyst region extraction, morphological opening-by-reconstruction is applied on the output of K-mean clustering algorithm. The results are presented both qualitatively and quantitatively, which demonstrate the superiority of enhanced K-mean as compared to standard K-mean and region growing algorithm.",
"title": ""
},
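A minimal sketch, under stated assumptions, of the enhanced K-means pipeline described in the passage above: intensity-based K-means clustering followed by morphological opening-by-reconstruction on the selected cluster. The synthetic image, the choice of k = 3, and the structuring-element radius are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.morphology import erosion, reconstruction, disk

# Synthetic stand-in for a liver image slice: noisy mid-grey background
# with one bright circular "cyst" blob (purely hypothetical data).
rng = np.random.default_rng(0)
img = 0.3 + 0.05 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2] = 0.9

# Step 1: K-means clustering on pixel intensities (k = 3 is an assumption).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)
).reshape(img.shape)

# Keep the cluster with the highest mean intensity as the candidate cyst mask.
cyst_cluster = max(range(3), key=lambda k: img[labels == k].mean())
mask = (labels == cyst_cluster).astype(float)

# Step 2: morphological opening-by-reconstruction (erosion, then reconstruction
# by dilation) removes small spurious responses while preserving the large cyst region.
marker = erosion(mask, disk(5))
opened = reconstruction(marker, mask, method="dilation")

print("candidate pixels before/after post-processing:", int(mask.sum()), int(opened.sum()))
```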
{
"docid": "2b16725c22f06b8155ce948636877004",
"text": "The Internet of Things (IoT) aims to connect billions of smart objects to the Internet, which can bring a promising future to smart cities. These objects are expected to generate large amounts of data and send the data to the cloud for further processing, especially for knowledge discovery, in order that appropriate actions can be taken. However, in reality sensing all possible data items captured by a smart object and then sending the complete captured data to the cloud is less useful. Further, such an approach would also lead to resource wastage (e.g., network, storage, etc.). The Fog (Edge) computing paradigm has been proposed to counterpart the weakness by pushing processes of knowledge discovery using data analytics to the edges. However, edge devices have limited computational capabilities. Due to inherited strengths and weaknesses, neither Cloud computing nor Fog computing paradigm addresses these challenges alone. Therefore, both paradigms need to work together in order to build a sustainable IoT infrastructure for smart cities. In this article, we review existing approaches that have been proposed to tackle the challenges in the Fog computing domain. Specifically, we describe several inspiring use case scenarios of Fog computing, identify ten key characteristics and common features of Fog computing, and compare more than 30 existing research efforts in this domain. Based on our review, we further identify several major functionalities that ideal Fog computing platforms should support and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.",
"title": ""
},
{
"docid": "1bf2f9e48a67842412a3b32bb2dd3434",
"text": "Since Paul Broca, the relationship between mind and brain has been the central preoccupation of cognitive neuroscience. In the 19th century, recognition that mental faculties might be understood by observations of individuals with brain damage led to vigorous debates about the properties of mind. By the end of the First World War, neurologists had outlined basic frameworks for the neural organization of language, perception, and motor cognition. Geschwind revived these frameworks in the 1960s and by the 1980s, lesion studies had incorporated methods from experimental psychology, models from cognitive science, formalities from computational approaches, and early developments in structural brain imaging. Around the same time, functional neuroimaging entered the scene. Early xenon probes evolved to the present-day wonders of BOLD and perfusion imaging. In a quick two decades, driven by these technical advances, centers for cognitive neuroscience now dot the landscape, journals such as this one are thriving, and the annual meeting of the Society for Cognitive Neuroscience is overflowing. In these heady times, a group of young cognitive neuroscientists training at a center in which human lesion studies and functional neuroimaging are pursued with similar vigor inquire about the relative impact of these two methods on the field. Fellows and colleagues, in their article titled ‘‘Method matters: An empirical study of impact on cognitive neuroscience,’’ point out that the nature of the evidence derived from the two methods are different. Importantly, they have complementary strengths and weaknesses. A critical difference highlighted in their article is that functional imaging by necessity provides correlational data, whereas lesion studies can support necessity claims for a specific brain region in a particular function. The authors hypothesize that despite the obvious growth of functional imaging in the last decade or so, lesion studies would have a disproportionate impact on cognitive neuroscience because they offer the possibility of establishing a causal role for structure in behavior in a way that is difficult to establish using functional imaging. The authors did not confirm this hypothesis. Using bibliometric methods, they found that functional imaging studies were cited three times as often as lesion studies, in large part because imaging studies were more likely to be published in high-impact journals. Given the complementary nature of the evidence from both methods, they anticipated extensive cross-method references. However, they found a within-method bias to citations generally, and, furthermore, functional imaging articles cited lesion studies considerably less often than the converse. To confirm the trends indicated by Fellows and colleagues, I looked at the distribution of cognitive neuroscience methods in the abstracts accepted for the 2005 Annual Meeting of the Cognitive Neuroscience Society (see Figure 1). Imaging studies composed over a third of all abstracts, followed by electrophysiological studies, the bulk of which were event-related potential (ERP) and magnetoencephalogram (MEG) studies. Studies that used patient populations composed 16% of the abstracts. The patient studies were almost evenly split between those focused on understanding a disease (47%), such as autism or schizophrenia, and those in which structure–function relationships were a consideration (53%). 
These observations do not speak of the final impact of these studies, but they do point out the relative lack of patient-based studies, particularly those addressing basic cognitive neuroscience questions. Fellows and colleagues pose the following question: Despite the greater 'in-principle' inferential strength of lesion than of functional imaging studies, why in practice do they have less impact on the field? They suggest that sociologic and practical considerations, rather than scientific merit, might be at play. Here, I offer my speculations on the factors that contribute to the relative impact of these methods. These speculations are not intended to be comprehensive. Rather, they are intended to begin conversations in response to the question posed by Fellows and colleagues. In my view, the disproportionate impact of functional imaging compared to lesion studies is driven by three factors: the appeal of novelty and technology, the ease of access to neural data, and, in a subtle way, the pragmatics of hypothesis testing. First, novelty is intrinsically appealing. As a clinician, I often encounter patients requesting the latest medications, even when they are more expensive and not demonstrably better than older ones. As scions of the enlightenment, many of us believe in progress, and that things newer are generally things better. Lesion studies have been around for a century and a half. Any advances made now are likely to be incremental. By contrast, functional imaging is truly a new way to examine the brain.",
"title": ""
}
] |
scidocsrr
|
8f683401322bb89c6226b5f73d9bc2f1
|
An agent-based market platform for Smart Grids
|
[
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
}
] |
[
{
"docid": "04ce0390032b01d559585d82be9cd434",
"text": "From an internal audit perspective, enterprise systems have created new opportunities and challenges in managing internal as well as external risks. In this work, we report results of a survey that examines internal auditors’ ability to identify and manage operational, financial, technological, compliance and other risks as the organization migrates to an ERP environment. Our findings show that the internal auditors perceive a reduction in financial and operational risk and an increase in technical risks. These effects are somewhat mitigated by their ability to assess and manage these risks. We also find that internal audit departments satisfied their needs for ERP skills not by outsourcing but by providing staff with in-house training.",
"title": ""
},
{
"docid": "9b0d413795d6fe2631985d54c42b970d",
"text": "Deep analysis of domain content yields novel insights and can be used to produce better courses. Aspects of such analysis can be performed by applying AI and statistical algorithms to student data collected from educational technology and better cognitive models can be discovered and empirically validated in terms of more accurate predictions of student learning. However, can such improved models yield improved student learning? This paper reports positively on progress in closing this loop. We demonstrate that a tutor unit, redesigned based on data-driven cognitive model improvements, helped students reach mastery more efficiently. In particular, it produced better learning on the problem-decomposition planning skills that were the focus of the cognitive model improvements.",
"title": ""
},
{
"docid": "d59ed793be5213d9ce8800de3c1c072d",
"text": "As the electric vehicle (EV) is becoming a significant component of the loads, an accurate and valid model for the EV charging demand is the key to enable accurate load forecasting, demand respond, system planning, and several other important applications. We propose a data driven queuing model for residential EV charging demand by performing big data analytics on smart meter measurements. The data driven model captures the non-homogeneity and periodicity of the residential EV charging behavior through a self-service queue with a periodic and non-homogeneous Poisson arrival rate, an empirical distribution for charging duration and a finite calling population. Upon parameter estimation, we further validate the model by comparing the simulated data series with real measurements. The hypothesis test shows the proposed model accurately captures the charging behavior. We further acquire the long-run average steady state probabilities and simultaneous rate of the EV charging demand through simulation output analysis.",
"title": ""
},
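A minimal sketch of the arrival side of such a queuing model, assuming a hypothetical periodic arrival-rate function and a toy empirical duration distribution (neither comes from the paper). The non-homogeneous Poisson process is simulated by standard thinning.

```python
import numpy as np

rng = np.random.default_rng(1)

def hourly_rate(t_hours):
    """Hypothetical periodic arrival rate (EV plug-ins per hour): low at night, evening peak."""
    return 0.5 + 2.0 * np.exp(-((t_hours % 24) - 19.0) ** 2 / 8.0)

def simulate_arrivals(days=7):
    """Thinning simulation of a non-homogeneous Poisson process over the given horizon."""
    horizon = 24.0 * days
    lam_max = 2.5  # upper bound on hourly_rate over one day
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)       # candidate arrival from a homogeneous process
        if t > horizon:
            break
        if rng.random() < hourly_rate(t) / lam_max:  # accept with probability lambda(t)/lam_max
            arrivals.append(t)
    return np.array(arrivals)

arrivals = simulate_arrivals()
# Toy empirical charging-duration distribution (hours), standing in for the fitted one.
durations = rng.choice([2.0, 3.0, 4.0], size=len(arrivals), p=[0.3, 0.5, 0.2])
print(f"{len(arrivals)} charging sessions in a week, mean duration {durations.mean():.2f} h")
```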
{
"docid": "d698f181eb7682d9bf98b3bc103abaac",
"text": "Current database research identified the use of computational power of GPUs as a way to increase the performance of database systems. As GPU algorithms are not necessarily faster than their CPU counterparts, it is important to use the GPU only if it will be beneficial for query processing. In a general database context, only few research projects address hybrid query processing, i.e., using a mix of CPUand GPU-based processing to achieve optimal performance. In this paper, we extend our CPU/GPU scheduling framework to support hybrid query processing in database systems. We point out fundamental problems and propose an algorithm to create a hybrid query plan for a query using our scheduling framework. Additionally, we provide cost metrics, which consider the possible overlapping of data transfers and computation on the GPU. Furthermore, we present algorithms to create hybrid query plans for query sequences and query trees.",
"title": ""
},
{
"docid": "5dbc520fbac51f9cc1d13480e7bfb603",
"text": "In 1899, Nikola Tesla, who had devised a type of resonant transformer called the Tesla coil, achieved a major breakthrough in his work by transmitting 100 million volts of electric power wirelessly over a distance of 26 miles to light up a bank of 200 light bulbs and run one electric motor. Tesla claimed to have achieved 95% efficiency, but the technology had to be shelved because the effects of transmitting such high voltages in electric arcs would have been disastrous to humans and electrical equipment in the vicinity. This technology has been languishing in obscurity for a number of years, but the advent of portable devices such as mobiles, laptops, smartphones, MP3 players, etc warrants another look at the technology. We propose the use of a new technology, based on strongly coupled magnetic resonance. It consists of a transmitter, a current carrying copper coil, which acts as an electromagnetic resonator and a receiver, another copper coil of similar dimensions to which the device to be powered is attached. The transmitter emits a non-radiative magnetic field resonating at MHz frequencies, and the receiving unit resonates in that field. The resonant nature of the process ensures a strong interaction between the sending and receiving unit, while interaction with rest of the environment is weak.",
"title": ""
},
{
"docid": "1566c80c4624533292c7442c61f3be15",
"text": "Modern software often relies on the combination of several software modules that are developed independently. There are use cases where different software libraries from different programming languages are used, e.g., embedding DLL files in JAVA applications. Even more complex is the case when different programming paradigms are combined like within applications with database connections, for instance PHP and SQL. Such a diversification of programming languages and modules in just one software application is becoming more and more important, as this leads to a combination of the strengths of different programming paradigms. But not always, the developers are experts in the different programming languages or even in different programming paradigms. So, it is desirable to provide easy to use interfaces that enable the integration of programs from different programming languages and offer access to different programming paradigms. In this paper we introduce a connector architecture for two programming languages of different paradigms: JAVA as a representative of object oriented programming languages and PROLOG for logic programming. Our approach provides a fast, portable and easy to use communication layer between JAVA and PROLOG. The exchange of information is done via a textual term representation which can be used independently from a deployed PROLOG engine. The proposed connector architecture allows for Object Unification on the JAVA side. We provide an exemplary connector for JAVA and SWI-PROLOG, a well-known PROLOG implementation.",
"title": ""
},
{
"docid": "4f64b2b2b50de044c671e3d0d434f466",
"text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …",
"title": ""
},
{
"docid": "917c26a6b09842c97be03abd830c8095",
"text": "With the popularity of deep learning (DL), artificial intelligence (AI) has been applied in many areas of human life. Artificial neural network or neural network (NN), the main technique behind DL, has been extensively studied to facilitate computer vision and natural language processing. However, the more we rely on information technology, the more vulnerable we are. That is, malicious NNs could bring huge threat in the so-called coming AI era. In this paper, for the first time in the literature, we propose a novel approach to design and insert powerful neural-level trojans or PoTrojan in pre-trained NN models. Most of the time, PoTrojans remain inactive, not affecting the normal functions of their host NN models. PoTrojans could only be triggered in very rare conditions. Once activated, however, the PoTrojans could cause the host NN models to malfunction, either falsely predicting or classifying, which is a significant threat to human society of the AI era. We would explain the principles of PoTrojans and the easiness of designing and inserting them in pre-trained deep learning models. PoTrojans doesn’t modify the existing architecture or parameters of the pre-trained models, without re-training. Hence, the proposed method is very efficient.",
"title": ""
},
{
"docid": "9f87ea8fd766f4b208ac142dcbbed4b2",
"text": "The dynamic marketplace in online advertising calls for ranking systems that are optimized to consistently promote and capitalize better performing ads. The streaming nature of online data inevitably makes an advertising system choose between maximizing its expected revenue according to its current knowledge in short term (exploitation) and trying to learn more about the unknown to improve its knowledge (exploration), since the latter might increase its revenue in the future. The exploitation and exploration (EE) tradeoff has been extensively studied in the reinforcement learning community, however, not been paid much attention in online advertising until recently. In this paper, we develop two novel EE strategies for online advertising. Specifically, our methods can adaptively balance the two aspects of EE by automatically learning the optimal tradeoff and incorporating confidence metrics of historical performance. Within a deliberately designed offline simulation framework we apply our algorithms to an industry leading performance based contextual advertising system and conduct extensive evaluations with real online event log data. The experimental results and detailed analysis reveal several important findings of EE behaviors in online advertising and demonstrate that our algorithms perform superiorly in terms of ad reach and click-through-rate (CTR).",
"title": ""
},
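As a generic illustration of the exploitation-exploration trade-off discussed above (not the paper's algorithm), a UCB-style policy adds a confidence bonus to each ad's empirical CTR, so poorly explored ads still receive some traffic while better performers are increasingly exploited. The true CTRs below are made-up numbers.

```python
import math
import random

random.seed(0)

# Hypothetical true click-through rates for three candidate ads (unknown to the policy).
true_ctr = [0.030, 0.045, 0.020]

clicks = [0] * len(true_ctr)
impressions = [0] * len(true_ctr)

def ucb_choose(t):
    """Pick the ad with the highest empirical CTR plus a confidence bonus (UCB1)."""
    for ad in range(len(true_ctr)):   # ensure every ad is tried at least once
        if impressions[ad] == 0:
            return ad
    return max(
        range(len(true_ctr)),
        key=lambda ad: clicks[ad] / impressions[ad]
        + math.sqrt(2.0 * math.log(t) / impressions[ad]),
    )

for t in range(1, 100_001):
    ad = ucb_choose(t)
    impressions[ad] += 1
    clicks[ad] += random.random() < true_ctr[ad]   # simulated click

print("impressions per ad:", impressions)  # most traffic should flow to the best ad
```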
{
"docid": "b0575058a6950bc17a976504145dca0e",
"text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.",
"title": ""
},
{
"docid": "7b7a0b0b6a36789834c321d04c2e2f8f",
"text": "In the present paper we propose and evaluate a framework for detection and classification of plant leaf/stem diseases using image processing and neural network technique. The images of plant leaves affected by four types of diseases namely early blight, late blight, powdery-mildew and septoria has been considered for study and evaluation of feasibility of the proposed method. The color transformation structures were obtained by converting images from RGB to HSI color space. The Kmeans clustering algorithm was used to divide images into clusters for demarcation of infected area of the leaves. After clustering, the set of color and texture features viz. moment, mean, variance, contrast, correlation and entropy were extracted based on Color Co-occurrence Method (CCM). A feed forward back propagation neural network was configured and trained using extracted set of features and subsequently utilized for detection of leaf diseases. Keyword: Color Co-Occurrence Method, K-Means, Feed Forward Neural Network",
"title": ""
},
{
"docid": "1a9086eb63bffa5a36fde268fb74c7a6",
"text": "This brief presents a simple reference circuit with channel-length modulation compensation to generate a reference voltage of 221 mV using subthreshold of MOSFETs at supply voltage of 0.85 V with power consumption of 3.3 muW at room temperature using TSMC 0.18-mum technology. The proposed circuit occupied in less than 0.0238 mm 2 achieves the reference voltage variation of 2 mV/V for supply voltage from 0.9 to 2.5V and about 6 mV of temperature variation in the range from -20degC to 120 degC. The agreement of simulation and measurement data is demonstrated",
"title": ""
},
{
"docid": "64e2b73e8a2d12a1f0bbd7d07fccba72",
"text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.",
"title": ""
},
{
"docid": "41e74e0226ef48076aa3e33f2f652b80",
"text": "Gastroschisis and omphalocele are the two most common congenital abdominal wall defects. Both are frequently detected prenatally due to routine maternal serum screening and fetal ultrasound. Prenatal diagnosis may influence timing, mode and location of delivery. Prognosis for gastroschisis is primarily determined by the degree of bowel injury, whereas prognosis for omphalocele is related to the number and severity of associated anomalies. The surgical management of both conditions consists of closure of the abdominal wall defect, while minimizing the risk of injury to the abdominal viscera either through direct trauma or due to increased intra-abdominal pressure. Options include primary closure or a variety of staged approaches. Long-term outcome is favorable in most cases; however, significant associated anomalies (in the case of omphalocele) or intestinal dysfunction (in the case of gastroschisis) may result in morbidity and mortality.",
"title": ""
},
{
"docid": "dddef6d3c0b8d32f215094f7fd8a5f54",
"text": "Complex systems are often characterized by distinct types of interactions between the same entities. These can be described as a multilayer network where each layer represents one type of interaction. These layers may be interdependent in complicated ways, revealing different kinds of structure in the network. In this work we present a generative model, and an efficient expectation-maximization algorithm, which allows us to perform inference tasks such as community detection and link prediction in this setting. Our model assumes overlapping communities that are common between the layers, while allowing these communities to affect each layer in a different way, including arbitrary mixtures of assortative, disassortative, or directed structure. It also gives us a mathematically principled way to define the interdependence between layers, by measuring how much information about one layer helps us predict links in another layer. In particular, this allows us to bundle layers together to compress redundant information and identify small groups of layers which suffice to predict the remaining layers accurately. We illustrate these findings by analyzing synthetic data and two real multilayer networks, one representing social support relationships among villagers in South India and the other representing shared genetic substring material between genes of the malaria parasite.",
"title": ""
},
{
"docid": "a8d02f362ba8210488e4dea1a1bf9b6f",
"text": "BACKGROUND\nThe AMNOG regulation, introduced in 2011 in Germany, changed the game for new drugs. Now, the industry is required to submit a dossier to the GBA (the central decision body in the German sickness fund system) to show additional benefit. After granting the magnitude of the additional benefit by the GBA, the manufacturer is entitled to negotiate the reimbursement price with the GKV-SV (National Association of Statutory Health Insurance Funds). The reimbursement price is defined as a discount on the drug price at launch. As the price or discount negotiations between the manufacturers and the GKV-SV takes place behind closed doors, the factors influencing the results of the negotiation are not known.\n\n\nOBJECTIVES\nThe aim of this evaluation is to identify factors influencing the results of the AMNOG price negotiation process.\n\n\nMETHODS\nThe analysis was based on a dataset containing detailed information on all assessments until the end of 2015. A descriptive analysis was followed by an econometric analysis of various potential factors (benefit rating, size of target population, deviating from appropriate comparative therapy and incorporation of HRQoL-data).\n\n\nRESULTS\nUntil December 2015, manufacturers and the GKV-SV finalized 96 negotiations in 193 therapeutic areas, based on assessment conducted by the GBA. The GBA has granted an additional benefit to 100/193 drug innovations. Negotiated discount was significantly higher for those drugs without additional benefit (p = 0.030) and non-orphan drugs (p = 0.015). Smaller population size, no deviation from recommended appropriate comparative therapy and the incorporation of HRQoL-data were associated with a lower discount on the price at launch. However, neither a uni- nor the multivariate linear regression showed enough power to predict the final discount.\n\n\nCONCLUSIONS\nAlthough the AMNOG regulation implemented binding and strict rules for the benefit assessment itself, the outcome of the discount negotiations are still unpredictable. Obviously, negotiation tactics, the current political situation and soft factors seem to play a more influential role for the outcome of the negotiations than the five hard and known factors analyzed in this study. Further research is needed to evaluate additional factors.",
"title": ""
},
{
"docid": "fe043223b37f99419d9dc2c4d787cfbb",
"text": "We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large scale experiment on a video-sequence of over 10,000 frames in length.",
"title": ""
},
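A minimal sketch of the two ideas in the passage above, under simplifying assumptions: an MRF-style pairwise interaction factor that down-weights joint hypotheses placing two targets on top of each other, and a Metropolis-Hastings move that samples the joint state rather than relying on plain importance sampling. The Gaussian likelihood, the interaction kernel, and the synthetic observations are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def mrf_interaction(states, scale=2.0):
    """Pairwise MRF prior: penalise hypotheses where two targets nearly coincide."""
    g = 1.0
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = np.linalg.norm(states[i] - states[j])
            g *= 1.0 - np.exp(-d ** 2 / scale ** 2)
    return g

def likelihood(states, observations, sigma=1.0):
    """Independent Gaussian likelihood of one observation per target (a simplification)."""
    diffs = states - observations
    return float(np.exp(-0.5 * np.sum(diffs ** 2) / sigma ** 2))

def mcmc_step(states, observations, step=0.5):
    """One Metropolis-Hastings move on a single randomly chosen target's 2-D state."""
    proposal = states.copy()
    k = rng.integers(len(states))
    proposal[k] += rng.normal(scale=step, size=2)
    num = likelihood(proposal, observations) * mrf_interaction(proposal)
    den = likelihood(states, observations) * mrf_interaction(states)
    return proposal if rng.random() < num / max(den, 1e-300) else states

# Two interacting targets with synthetic observations near (0, 0) and (1, 0).
obs = np.array([[0.0, 0.0], [1.0, 0.0]])
sample = obs + rng.normal(scale=1.0, size=obs.shape)
for _ in range(500):
    sample = mcmc_step(sample, obs)
print("posterior sample of the joint state:", np.round(sample, 2))
```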
{
"docid": "8fd830d62cceb6780d0baf7eda399fdf",
"text": "Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.",
"title": ""
},
{
"docid": "e5b543b8880ec436874bee6b03a58618",
"text": "This paper outlines my concerns with Qualitative Data Analysis’ (QDA) numerous remodelings of Grounded Theory (GT) and the subsequent eroding impact. I cite several examples of the erosion and summarize essential elements of classic GT methodology. It is hoped that the article will clarify my concerns with the continuing enthusiasm but misunderstood embrace of GT by QDA methodologists and serve as a preliminary guide to novice researchers who wish to explore the fundamental principles of GT.",
"title": ""
},
{
"docid": "1565009c9fd58862f376066ffb8e3e48",
"text": "Attendance plays a vital role in evaluating a student. The traditional method of taking",
"title": ""
}
] |
scidocsrr
|
51f9a7195f37dc291221eef9bd7848f4
|
The Modified Checklist for Autism in Toddlers: an initial study investigating the early detection of autism and pervasive developmental disorders.
|
[
{
"docid": "9fb06b9431ddebcad14ac970ec3baa20",
"text": "We use a new model of metarepresentational development to predict a cognitive deficit which could explain a crucial component of the social impairment in childhood autism. One of the manifestations of a basic metarepresentational capacity is a ‘theory of mind’. We have reason to believe that autistic children lack such a ‘theory’. If this were so, then they would be unable to impute beliefs to others and to predict their behaviour. This hypothesis was tested using Wimmer and Perner’s puppet play paradigm. Normal children and those with Down’s syndrome were used as controls for a group of autistic children. Even though the mental age of the autistic children was higher than that of the controls, they alone failed to impute beliefs to others. Thus the dysfunction we have postulated and demonstrated is independent of mental retardation and specific to autism.",
"title": ""
}
] |
[
{
"docid": "e249d8d00610ef1e5e48fdc39b63c803",
"text": "With the increasing availability of metropolitan transportation data, such as those from vehicle GPSs (Global Positioning Systems) and road-side sensors, it becomes viable for authorities, operators, as well as individuals to analyze the data for a better understanding of the transportation system and possibly improved utilization and planning of the system. We report our experience in building the VAST (Visual Analytics for Smart Transportation) system. Our key observation is that metropolitan transportation data are inherently visual as they are spatio-temporal around road networks. Therefore, we visualize traffic data together with digital maps and support analytical queries through this interactive visual interface. As a case study, we demonstrate VAST on real-world taxi GPS and meter data sets from 15, 000 taxis running two months in a Chinese city of over 10 million population. We discuss the technical challenges in data cleaning, storage, visualization, and query processing, and offer our first-hand lessons learned from developing the system.",
"title": ""
},
{
"docid": "e0b1e38b08b6fb098808585a5a3c8753",
"text": "The decade since the Human Genome Project ended has witnessed a remarkable sequencing technology explosion that has permitted a multitude of questions about the genome to be asked and answered, at unprecedented speed and resolution. Here I present examples of how the resulting information has both enhanced our knowledge and expanded the impact of the genome on biomedical research. New sequencing technologies have also introduced exciting new areas of biological endeavour. The continuing upward trajectory of sequencing technology development is enabling clinical applications that are aimed at improving medical diagnosis and treatment.",
"title": ""
},
{
"docid": "49239993ee1c281e8384f0ce01f03fd6",
"text": "With the advent of social media, our online feeds increasingly consist of short, informal, and unstructured text. This textual data can be analyzed for the purpose of improving user recommendations and detecting trends. Instagram is one of the largest social media platforms, containing both text and images. However, most of the prior research on text processing in social media is focused on analyzing Twitter data, and little attention has been paid to text mining of Instagram data. Moreover, many text mining methods rely on annotated training data, which in practice is both difficult and expensive to obtain. In this paper, we present methods for unsupervised mining of fashion attributes from Instagram text, which can enable a new kind of user recommendation in the fashion domain. In this context, we analyze a corpora of Instagram posts from the fashion domain, introduce a system for extracting fashion attributes from Instagram, and train a deep clothing classifier with weak supervision to classify Instagram posts based on the associated text. With our experiments, we confirm that word embeddings are a useful asset for information extraction. Experimental results show that information extraction using word embeddings outperforms a baseline that uses Levenshtein distance. The results also show the benefit of combining weak supervision signals using generative models instead of majority voting. Using weak supervision and generative modeling, an F1 score of 0.61 is achieved on the task of classifying the image contents of Instagram posts based solely on the associated text, which is on level with human performance. Finally, our empirical study provides one of the few available studies on Instagram text and shows that the text is noisy, that the text distribution exhibits the long-tail phenomenon, and that comment sections on Instagram are multi-lingual.",
"title": ""
},
{
"docid": "3ec3285a2babcd3a00b453956dda95aa",
"text": "Microblog normalisation methods often utilise complex models and struggle to differentiate between correctly-spelled unknown words and lexical variants of known words. In this paper, we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution (e.g. tomorrow for tmrw). We use context information to generate possible variant and normalisation pairs and then rank these by string similarity. Highlyranked pairs are selected to populate the dictionary. We show that a dictionary-based approach achieves state-of-the-art performance for both F-score and word error rate on a standard dataset. Compared with other methods, this approach offers a fast, lightweight and easy-to-use solution, and is thus suitable for high-volume microblog pre-processing. 1 Lexical Normalisation A staggering number of short text “microblog” messages are produced every day through social media such as Twitter (Twitter, 2011). The immense volume of real-time, user-generated microblogs that flows through sites has been shown to have utility in applications such as disaster detection (Sakaki et al., 2010), sentiment analysis (Jiang et al., 2011; González-Ibáñez et al., 2011), and event discovery (Weng and Lee, 2011; Benson et al., 2011). However, due to the spontaneous nature of the posts, microblogs are notoriously noisy, containing many non-standard forms — e.g., tmrw “tomorrow” and 2day “today” — which degrade the performance of natural language processing (NLP) tools (Ritter et al., 2010; Han and Baldwin, 2011). To reduce this effect, attempts have been made to adapt NLP tools to microblog data (Gimpel et al., 2011; Foster et al., 2011; Liu et al., 2011b; Ritter et al., 2011). An alternative approach is to pre-normalise non-standard lexical variants to their standard orthography (Liu et al., 2011a; Han and Baldwin, 2011; Xue et al., 2011; Gouws et al., 2011). For example, se u 2morw!!! would be normalised to see you tomorrow! The normalisation approach is especially attractive as a preprocessing step for applications which rely on keyword match or word frequency statistics. For example, earthqu, eathquake, and earthquakeee — all attested in a Twitter corpus — have the standard form earthquake; by normalising these types to their standard form, better coverage can be achieved for keyword-based methods, and better word frequency estimates can be obtained. In this paper, we focus on the task of lexical normalisation of English Twitter messages, in which out-of-vocabulary (OOV) tokens are normalised to their in-vocabulary (IV) standard form, i.e., a standard form that is in a dictionary. Following other recent work on lexical normalisation (Liu et al., 2011a; Han and Baldwin, 2011; Gouws et al., 2011; Liu et al., 2012), we specifically focus on one-to-one normalisation in which one OOV token is normalised to one IV word. Naturally, not all OOV words in microblogs are lexical variants of IV words: named entities, e.g., are prevalent in microblogs, but not all named entities are included in our dictionary. One challenge for lexical normalisation is therefore to dis-",
"title": ""
},
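A minimal sketch of the dictionary-based normalisation idea in the passage above: candidate (variant, standard form) pairs are ranked by a string-similarity score, the best-scoring pairs populate a substitution dictionary, and normalisation is then simple token replacement. The candidate pairs, the use of difflib's ratio, and the 0.5 threshold are illustrative assumptions, not the paper's context-based generation or ranking measure.

```python
import difflib

# Hypothetical (OOV variant, IV candidate) pairs, e.g. proposed from shared context words.
candidate_pairs = [
    ("tmrw", "tomorrow"),
    ("2morw", "tomorrow"),
    ("tmrw", "term"),
    ("se", "see"),
    ("earthquakeee", "earthquake"),
]

def similarity(a, b):
    """Character-level similarity in [0, 1] (stand-in for the ranking measure)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Rank pairs by similarity and keep the best normalisation for each variant above a threshold.
norm_dict = {}
for variant, candidate in sorted(candidate_pairs, key=lambda p: -similarity(*p)):
    if similarity(variant, candidate) >= 0.5 and variant not in norm_dict:
        norm_dict[variant] = candidate

def normalise(tweet):
    """Lexical normalisation by simple string substitution."""
    return " ".join(norm_dict.get(tok, tok) for tok in tweet.split())

print(normalise("se u tmrw"))  # -> "see u tomorrow" ("u" is not in this toy dictionary)
```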
{
"docid": "985e8fae88a81a2eec2ca9cc73740a0f",
"text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.",
"title": ""
},
{
"docid": "0a3713459412d3278a19a3ff8855a6ba",
"text": "a Universidad Autónoma del Estado de Hidalgo, Escuela Superior de Tizayuca, Carretera Federal Pachuca – Tizayuca km 2.5, CP 43800, Tizayuca, Hidalgo, Mexico b Universidad Autónoma del Estado de México, Av. Jardín Zumpango s/n, Fraccionamiento El Tecojote, CP 56259, Texcoco-Estado de México, Mexico c Centro de Investigación y de Estudios Avanzados del IPN, Departamento de Computación, Av. Instituto Politécnico Nacional 2508, San Pedro Zacatenco, CP 07360, México DF, Mexico",
"title": ""
},
{
"docid": "85e9ab280eeb91a902344b5148b35e29",
"text": "Task-oriented dialogue systems can efficiently serve a large number of customers and relieve people from tedious works. However, existing task-oriented dialogue systems depend on handcrafted actions and states or extra semantic labels, which sometimes degrades user experience despite the intensive human intervention. Moreover, current user simulators have limited expressive ability so that deep reinforcement Seq2Seq models have to rely on selfplay and only work in some special cases. To address those problems, we propose a uSer and Agent Model IntegrAtion (SAMIA) framework inspired by an observation that the roles of the user and agent models are asymmetric. Firstly, this SAMIA framework model the user model as a Seq2Seq learning problem instead of ranking or designing rules. Then the built user model is used as a leverage to train the agent model by deep reinforcement learning. In the test phase, the output of the agent model is filtered by the user model to enhance the stability and robustness. Experiments on a real-world coffee ordering dataset verify the effectiveness of the proposed SAMIA framework.",
"title": ""
},
{
"docid": "e045619ede30efb3338e6278f23001d7",
"text": "Particle filtering has become a standard tool for non-parametric estimation in computer vision tracking applications. It is an instance of stochastic search. Each particle represents a possible state of the system. Higher concentration of particles at any given region of the search space implies higher probabilities. One of its major drawbacks is the exponential growth in the number of particles for increasing dimensions in the search space. We present a graph based filtering framework for hierarchical model tracking that is capable of substantially alleviate this issue. The method relies on dividing the search space in subspaces that can be estimated separately. Low correlated subspaces may be estimated with parallel, or serial, filters and have their probability distributions combined by a special aggregator filter. We describe a new algorithm to extract parameter groups, which define the subspaces, from the system model. We validate our method with different graph structures within a simple hand tracking experiment with both synthetic and real data",
"title": ""
},
{
"docid": "61055bd3152c1bff75ee5e69b603b49b",
"text": "This paper focuses on investigating immunological principles in designing a multi-agent system for intrusion/anomaly detection and response in networked computers. In this approach, the immunity-based agents roam around the machines (nodes or routers), and monitor the situation in the network (i.e. look for changes such as malfunctions, faults, abnormalities, misuse, deviations, intrusions, etc.). These agents can mutually recognize each other's activities and can take appropriate actions according to the underlying security policies. Specifically, their activities are coordinated in a hierarchical fashion while sensing, communicating and generating responses. Such an agent can learn and adapt to its environment dynamically and can detect both known and unknown intrusions. This research is the part of an effort to develop a multi-agent detection system that can simultaneously monitor networked computer's activities at different levels (such as user level, system level, process level and packet level) in order to determine intrusions and anomalies. The proposed intrusion detection system is designed to be flexible, extendible, and adaptable that can perform real-time monitoring in accordance with the needs and preferences of network administrators. This paper provides the conceptual view and a general framework of the proposed system. 1. Inspiration from the nature: Every organism in nature is constantly threatened by other organisms, and each species has evolved elaborate set of protective measures called, collectively, the immune system. The natural immune system is an adaptive learning system that is highly distributive in nature. It employs multi-level defense mechanisms to make rapid, highly specific and often very protective responses against wide variety of pathogenic microorganisms. The immune system is a subject of great research interest because of its powerful information processing capabilities [5,6]. Specifically, its' mechanisms to extract unique signatures from antigens and ability to recognize and classify dangerous antigenic peptides are very important. It also uses memory to remember signature patterns that have been seen previously, and use combinatorics to construct antibody for efficient detection. It is observed that the overall behavior of the system is an emergent property of several local interactions. Moreover, the immune response can be either local or systemic, depending on the route and property of the antigenic challenge [19]. The immune system is consists of different populations of immune cells (mainly B or T cells) which circulate at various primary and secondary lymphoid organs of the body. They are carefully controlled to ensure that appropriate populations of B and T cells (naive, effector, and memory) are recruited into different location [19]. This differential migration of lymphocyte subpopulations at different locations (organs) of the body is called trafficking or homing. The lymph nodes and organs provide specialized local environment (called germinal center) during pathogenic attack in any part of the body. This dynamic mechanism support to create a large number of antigen-specific lymphocytes (as effector and memory cells) for stronger defense through the process of the clonal expansion and differentiation. Interestingly, memory cells exhibit selective homing to the type of tissue in which they first encountered an antigen. 
Presumably this ensures that a particular memory cell will return to the location where it is most likely to re-encounter a subsequent antigenic challenge. The mechanisms of immune responses are self-regulatory in nature. There is no central organ that controls the functions of the immune system. The clonal expansion and proliferation of B cells are closely regulated (with a co-stimulation) in order to prevent an uncontrolled immune response. This second signal helps to ensure tolerance and judge between dangerous and harmless invaders. So the purpose of this accompanying signal in identifying a non-self is to minimize false alarms and to generate a decisive response in case of a real danger [19]. 2. Existing works in Intrusion Detection: The study of security in computer networks is a rapidly growing area of interest because of the proliferation of networks (LANs, WANs etc.), greater deployment of shared computer databases (packages) and the increasing reliance of companies, institutions and individuals on such data. Though there are many levels of access protection to computing and network resources, intruders are finding ways to enter many sites and systems, and cause major damage. So the task of providing and maintaining proper security in a network system becomes a challenging issue. Intrusion/Anomaly detection is an important part of computer security. It provides an additional layer of defense against computer misuse (abuse) after physical, authentication and access control. There exist different methods for intrusion detection [7,23,25,29] and the early models include IDES (later versions NIDES and MIDAS), W & S, AudES, NADIR, DIDS, etc. These approaches monitor audit trails generated by systems and user applications and perform various statistical analyses in order to derive regularities in behavior pattern. These works are based on the hypothesis that an intruder's behavior will be noticeably different from that of a legitimate user, and that security violations can be detected by monitoring these audit trails. Most of these methods, however, were used to monitor a single host [13,14], though NADIR and DIDS can collect and aggregate audit data from a number of hosts to detect intrusions. However, in all cases, there is no real analysis of patterns of network activities and they only perform centralized analysis. Recent works include GrIDS [27], which used hierarchical graphs to detect attacks on networked systems. Other approaches used autonomous agent architectures [1,2,26] for distributed intrusion detection. 3. Computer Immune Systems: Security in the field of computing may be considered analogous to immunity in natural systems. In computing, threats and dangers (of compromising privacy, integrity, and availability) may arise because of malfunction of components or intrusive activities (both internal and external). The idea of using immunological principles in computer security [9-11,15,16,18] dates back to 1994. Stephanie Forrest and her group at the University of New Mexico have been working on a research project with a long-term goal to build an artificial immune system for computers [9-11,15,16]. This immunity-based system has much more sophisticated notions of identity and protection than those afforded by current operating systems, and it is supposed to provide a general-purpose protection system to augment current computer security systems. 
The security of computer systems depends on such activities as detecting unauthorized use of computer facilities, maintaining the integrity of data files, and preventing the spread of computer viruses. The problem of protecting computer systems from harmful viruses is viewed as an instance of the more general problem of distinguishing self (legitimate users, uncorrupted data, etc.) from dangerous other (unauthorized users, viruses, and other malicious agents). This method (called the negative-selection algorithm) is intended to be complementary to the more traditional cryptographic and deterministic approaches to computer security. As an initial step, the negative-selection algorithm has been used as a file-authentication method on the problem of computer virus detection [9].",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "02edb85279317752bd86a8fe7f0ccfc0",
"text": "Despite the potential wealth of educational indicators expressed in a student's approach to homework assignments, how students arrive at their final solution is largely overlooked in university courses. In this paper we present a methodology which uses machine learning techniques to autonomously create a graphical model of how students in an introductory programming course progress through a homework assignment. We subsequently show that this model is predictive of which students will struggle with material presented later in the class.",
"title": ""
},
{
"docid": "595cb7698c38b9f5b189ded9d270fe69",
"text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.",
"title": ""
},
{
"docid": "9a98e97bb786a0c57a68e4cf8e4fb7a8",
"text": "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency.",
"title": ""
},
{
"docid": "ed5a17f62e4024727538aba18f39fc78",
"text": "The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.",
"title": ""
},
{
"docid": "d76a65397b62b511c2ee20b10edc7b00",
"text": "In this paper we introduce the Pivoting M-tree (PM-tree), a metric access method combining M-tree with the pivot-based approach. While in M-tree a metric region is represented by a hyper-sphere, in PM-tree the shape of a metric region is determined by intersection of the hyper-sphere and a set of hyper-rings. The set of hyper-rings for each metric region is related to a fixed set of pivot objects. As a consequence, the shape of a metric region bounds the indexed objects more tightly which, in turn, significantly improves the overall efficiency of similarity search. We present basic algorithms on PM-tree and two cost models for range query processing. Finally, the PM-tree efficiency is experimentally evaluated on large synthetic as well as real-world datasets.",
"title": ""
},
{
"docid": "d8a4476ca2406038d2ba01ffa8fac2ab",
"text": "Generating pluripotent stem cells directly from cells obtained from patients is one of the ultimate goals in regenerative medicine. Two \"reprogramming\" strategies for the generation of pluripotent stem cells from somatic cells have been studied extensively: nuclear transfer to oocytes and fusion with ES cells. The recent demonstration that, in mouse, nuclear transfer into zygotes can also be effective if the recipient cells are arrested in mitosis provides an exciting new avenue for this type of approach. Patient-specific pluripotent cells could potentially also be generated by the spontaneous reprogramming of bone marrow cells, spermatogonial cells, and parthenogenetic embryos. A third overall type of strategy arose from the demonstration that pluripotent stem (iPS) cells can be generated from mouse fibroblasts by the introduction of four transcription factors (Oct-3/4, Sox2, c-Myc, and KLF4). Recent work has underlined the potential of this strategy by improving the efficiency of the process and demonstrating that iPS cells can contribute to many different tissues in vivo, including the germline. Taken together, these studies underscore the crucial roles of transcription factors and chromatin remodeling in nuclear reprogramming.",
"title": ""
},
{
"docid": "509075d64990cf7258c13dd0dfd5e282",
"text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.",
"title": ""
},
{
"docid": "e1a4468ccd5305b5158c26b2160d04a6",
"text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.",
"title": ""
},
{
"docid": "c35341d3b82dd4921e752b4b774cd501",
"text": "The initial concept of a piezoelectric transformer (PT) was proposed by C.A. Rosen, K. Fish, and H.C. Rothenberg and is described in the U.S. Patent 2,830,274, applied for in 1954. Fifty years later, this technology has become one of the most promising alternatives for replacing the magnetic transformers in a wide range of applications. Piezoelectric transformers convert electrical energy into electrical energy by using acoustic energy. These devices are typically manufactured using piezoelectric ceramic materials that vibrate in resonance. With appropriate designs it is possible to step-up and step-down the voltage between the input and output of the piezoelectric transformer, without making use of wires or any magnetic materials. This technology did not reach commercial success until early the 90s. During this period, several companies, mainly in Japan, decided to introduce PTs for applications requiring small size, high step-up voltages, and low electromagnetic interference (EMI) signature. These PTs were developed based on optimizations of the initial Rosen concept, and thus typically referred to as “Rosen-type PTs”. Today’s, PTs are used for backlighting LCD displays in notebook computers, PDAs, and other handheld devices. The PT yearly sales estimate was about over 20 millions in 2000 and industry sources report that production of piezoelectric transformers in Japan is growing steadily at a rate of 10% annually. The reliability achieved in LCD applications and the advances in the related technologies (materials, driving circuitry, housing and manufacturing) have currently spurred enormous interest and confidence in expanding this technology to other fields of application. This, consequently, is expanding the business opportunities for PTs. Currently, the industry trend is moving in two directions: low-cost product market and valueadded product market. Prices of PTs have been declining in recent years, and this trend is expected to continue. Soon (if not already), this technology will become a serious candidate for replacing the magnetic transformers in cost-sensitive applications. Currently, leading makers are reportedly focusing on more value-added products. Two of the key value-added areas are miniaturization and higher output power. Piezoelectric transformers for power applications require lower output impedances, high power capabilities and high efficiency under step-down conditions. Among the different PT designs proposed as alternatives to the classical Rosen configuration, Transoner laminated radial PT has been demonstrated as the most promising technology for achieving high power levels. Higher powers than 100W, with power densities in the range of 30-40 W/cm2 have been demonstrated. Micro-PTs are currently being developed with sizes of less than 5mm diameter and 1mm thickness allowing up to 0.5W power transfer and up to 50 times gain. Smaller sizes could be in the future integrated to power MEMs systems. This paper summarizes the state of the art on the PT technology and introduces the current trends of this industry. HISTORICAL INTRODUCTION It has been 50 years since the development of piezoelectric ceramic transformers began. The first invention on piezoelectric transformers (PTs) has been traditionally associated with the patent of Charles A. Rosen et al., which was disclosed on January 4, 1954 and finally granted on April 8, 1958 [1]. Briefly after this first application, on September 17, 1956, H.Jaffe and Don A. 
Berlincourt, on behalf of the Clevite Companies, applied for the second patent on PT technology, which was granted on Jan. 24, 1961 [2]. Since then, the PT technology has been growing simultaneously with the progress in piezoceramic technology as well as with electronics in general. Currently, it is estimated that 25-30 million PTs are sold commercially every year for different applications. Thus, the growth of the technology is promising and is expected to expand to many other areas as an alternative to magnetic transformers. In an attempt to be historically accurate, it should be mentioned that the first studies on PTs took place in the late 20s and early 30s. Based on the research of the author of this paper, Alexander McLean Nicolson has the honor of being the first researcher to consider the idea of a piezoelectric transformer. In his patent US1829234 titled “Piezo-electric crystal transformer” [3], Nicolson describes the first research in this field. The work of Nicolson on piezoelectric transformers, recognized in several other patents [4], was limited to the use of piezoelectric crystals with obvious limitations in performance, design and applicability as compared to the later developed piezoceramic materials. Piezoelectric transformers (from now on referred to as piezoelectric ceramic transformers), like magnetic devices, are basically energy converters. A magnetic transformer operates by converting electrical input to magnetic energy and then reconverting that magnetic energy back to electrical output. A PT has an analogous operating mechanism. It converts an electrical input into mechanical energy and subsequently reconverts this mechanical energy back to an electrical output. This mechanical conversion is achieved by a standing wave vibrating at a frequency equal to a multiple of the mechanical resonance frequency of the transformer body, which is typically in the range of 50 to 150 kHz. Recently, PTs operating at 1MHz and higher have also been proposed. Piezoelectric transformers were initially considered as high voltage transformer devices. Two different designs driving the initial steps in the development of these “conventional” PTs were the so-called Rosen-type PT designs and the contour extensional mode uni-poled PTs. Until the early 90s, the technology evolution was based on improvements in these two basic designs. Although Rosen proposed several types of PT embodiments in his patents and publications, the name “Rosen-type PT” currently refers to those PTs representing an evolution on the initial rectangular design idea proposed by C. Rosen in 1954, as shown in Figure 1.",
"title": ""
},
{
"docid": "3a3f3e1c0eac36d53a40d7639c3d65cc",
"text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.",
"title": ""
}
] |
scidocsrr
|
ecd2b192dfd4446903cfdf44beb244d3
|
Efficient self-shadowed radiosity normal mapping
|
[
{
"docid": "96a10ef46ebc1b1a4075d874bdfabe50",
"text": "Bump mapping produces realistic shading by perturbing normal vectors to a surface, but does not show the shadows that the bumps cast on nearby parts of the same surface. In this paper, these shadows are found from precomputed tables of horizon angles, listing, for each position entry, the elevation of the horizon in a sampled collection of directions. These tables are made for bumps on a standard flat surface, and then a transformation is developed so that the same tables can be used for an arbitrary curved parametrized surface patch. This necessitates a new method for scaling the bump size to the patch size. Incremental calculations can be used in a scan line algorithm for polygonal surface approximations. The errors in the bump shadows are discussed, as well as their anti-aliasing. (An earlier version of this article appeared as Max [10].)",
"title": ""
}
] |
[
{
"docid": "a73167a43aec68b59968a014e553bf8d",
"text": "Between the late 1960s and the beginning of the 1980s, the wide recognition that simple dynamical laws could give rise to complex behaviors was sometimes hailed as a true scientific revolution impacting several disciplines, for which a striking label was coined—“chaos.” Mathematicians quickly pointed out that the purported revolution was relying on the abstract theory of dynamical systems founded in the late 19th century by Henri Poincaré who had already reached a similar conclusion. In this paper, we flesh out the historiographical tensions arising from these confrontations: longue-durée history and revolution; abstract mathematics and the use of mathematical techniques in various other domains. After reviewing the historiography of dynamical systems theory from Poincaré to the 1960s, we highlight the pioneering work of a few individuals (Steve Smale, Edward Lorenz, David Ruelle). We then go on to discuss the nature of the chaos phenomenon, which, we argue, was a conceptual reconfiguration as much as a sociodisciplinary convergence. C © 2002 Elsevier Science (USA)",
"title": ""
},
{
"docid": "7f070d85f4680a2b88d3b530dff0cfc5",
"text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.",
"title": ""
},
{
"docid": "1189c3648c2cce0c716ec7c0eca214d7",
"text": "This article considers the application of variational Bayesian methods to joint recursive estimation of the dynamic state and the time-varying measurement noise parameters in linear state space models. The proposed adaptive Kalman filtering method is based on forming a separable variational approximation to the joint posterior distribution of states and noise parameters on each time step separately. The result is a recursive algorithm, where on each step the state is estimated with Kalman filter and the sufficient statistics of the noise variances are estimated with a fixed-point iteration. The performance of the algorithm is demonstrated with simulated data.",
"title": ""
},
{
"docid": "1e391fba8f3d7a7b72e8f8c8c85e1080",
"text": "It is not my intention to compare their entire works in this short paper. It would be like comparing apples and pears – they have produced very different models and for very different purposes. While Ernst von Glasersfeld has always limited himself to a sharp focus on epistemology, Humberto Maturana has developed several different models relating to the different areas of cellular biology, experimental epistemology, neurophysiology, language, visual perception, and the “definition of the living,” among others. Indeed, in recent years Ernst von Glasersfeld (1995) has written that he now tries to avoid even using the term “epistemology” and writes about human “knowing.” “...(this book) is an attempt to explain a way of thinking and makes no claim to describe an independent reality. That is why I prefer to call it an approach to or a theory of knowing. Though I have used them in the past, I now try to avoid the terms ‘epistemology’ or ‘theory of knowledge’ for constructivism, because they tend to imply the traditional scenario according to which novice subjects are born into a ready-made world, which they must try to discover and ‘represent’ to themselves. From the constructivist point of view, the subject cannot transcend the limits of individual experience.” (Glasersfeld 1995, pp. 1–2) In his early studies Ernst von Glasersfeld noted a problem in Wittgenstein’s (1933) assertions about comparing our picture of reality with the reality in question in order to determine whether or not our own picture was true or false. Ernst von Glasersfeld (1987) comments: “How could one possibly carry out that comparison? With that question, although I did not know it at the time, I found myself in the company of Sextus Empiricus, of Montaigne, Berkeley, and Vico ... the company of all the courageous sceptics who ... have maintained that it is impossible to compare our image of reality with a reality outside. It is impossible, because in order to check whether our representation is a ‘true’ picture of reality we should have to have access not only to our representation but also to that outside reality before we get to know it. And because the only way in which we are supposed to get at reality is precisely the way we would like to check and verify, there is no possible escape from the dilemma.” (Glasersfeld 1987, pp. 137–138). So here is a very clear condemnation of “epistemological cheating” – the impossible feat of trying to peep around our perceptual “goggles” to see if our “picture” is approximating to the “real reality” or not. Over the past 20 years Ernst von Glasersfeld has put a lot of effort into understanding just where his work and the work of Humberto Maturana differ, especially in the fundamental matters of epistemology. Apart from his grave reservations about key concepts of Maturana’s work such as the “observer” (and how he comes about), “consciousness,” “awareness,” and “language” (its genesis, and that it precedes cognition, etc.), Ernst von Glasersfeld shares the perplexity of other authors regarding the ways in which Maturana can be seen to be “smuggling realism” back into his opus in one form or another (Mingers 1995, Johnson 1991, Held & Pols 1987). In Maturana’s writings there are many passages where one gets the impression that he edges over into the terrain of “realism” in his discussions and phraseologies. 
In attempting to understand this Ernst von Glasersfeld (1991) tries to explain that Maturana",
"title": ""
},
{
"docid": "c81e823de071ae451420326e9fbb2e3d",
"text": "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.",
"title": ""
},
{
"docid": "81f7938d647ac9658995fb61f508aa0c",
"text": "This letter describes a robust voice activity detector using an ultrasonic Doppler sonar device. An ultrasonic beam is incident on the talker's face. Facial movements result in Doppler frequency shifts in the reflected signal that are sensed by an ultrasonic sensor. Speech-related facial movements result in identifiable patterns in the spectrum of the received signal that can be used to identify speech activity. These sensors are not affected by even high levels of ambient audio noise. Unlike most other non-acoustic sensors, the device need not be taped to a talker. A simple yet robust method of extracting the voice activity information from the ultrasonic Doppler signal is developed and presented in this letter. The algorithm is seen to be very effective and robust to noise, and it can be implemented in real time.",
"title": ""
},
{
"docid": "628947fa49383b73eda8ad374423f8ce",
"text": "The proposed system for the cloud based automatic system involves the automatic updating of the data to the lighting system. It also reads the data from the base station in case of emergencies. Zigbee devices are used for wireless transmission of the data from the base station to the light system thus enabling an efficient street lamp control system. Infrared sensor and dimming control circuit is used to track the movement of human in a specific range and dims/bright the street lights accordingly hence saving a large amount of power. In case of emergencies data is sent from the particular light or light system and effective measures are taken accordingly.",
"title": ""
},
{
"docid": "38ec25930c008a1bc70b4c5e56774747",
"text": "Connectionist Temporal Classification (CTC) has recently shown improved efficiency in LVCSR decoding. One popular implementation is to use a CTC model to predict the phone posteriors at each frame which are then used for Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of a CTC model is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed. Here, a phone-level CTC lattice is constructed purely using the CTC acoustic model. The resultant CTC lattice is highly compact and removes tremendous search redundancy due to blank frames. Then, the CTC lattice can be composed with the standard WFST to yield the final decoding result. The proposed approach effectively separates the acoustic evidence calculation and the search operation. This not only significantly improves online search efficiency, but also allows flexible acoustic/linguistic resources to be used. Experiments on LVCSR tasks show that phone synchronous decoding can yield an extra 2-3 times speed up compared to the traditional frame synchronous CTC decoding implementation.",
"title": ""
},
{
"docid": "3e54834b8e64bbdf25dd0795e770d63c",
"text": "Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms basically based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. This proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, support vector machine learning will be utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1 % with the significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning technique was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.",
"title": ""
},
{
"docid": "7462810e07059616c0e16cc4d51a28f9",
"text": "This paper introduces the use of experiential learning during the early stages of teacher professional development. Teachers observe student outcomes from the very beginning of the process and experience new pedagogical approaches as learners themselves before adapting and implementing them in their own classrooms. This research explores the implementation of this approach with teachers in Irish second level schools who are being asked to make significant pedagogic changes as part of a major curriculum reform. Teachers’ self-reflections, observations and interviews demonstrate how the process and outcomes influenced their beliefs, resulting in meaningful changes in classroom practice. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "b3b0df15dbb6a516dbda0c4ebac38309",
"text": "The concept of alternative risk premia is an extension of the factor investing approach. Factor investing consists in building long-only equity portfolios, which are directly exposed to common risk factors like size, value or momentum. Alternative risk premia designate non-traditional risk premia other than a long exposure to equities and bonds. They may involve equities, rates, credit, currencies or commodities and correspond to long/short portfolios. However, contrary to traditional risk premia, it is more difficult to define alternative risk premia and which risk premia really matter. In fact, the term “alternative risk premia” encompasses two different types of systematic risk factor: skewness risk premia and market anomalies. For example, the most frequent alternative risk premia are carry and momentum, which are respectively a skewness risk premium and a market anomaly. Because the returns of alternative risk premia exhibit heterogeneous patterns in terms of statistical properties, option profile and drawdown, asset allocation is more complex than with traditional risk premia. In this context, risk diversification cannot be reduced to volatility diversification and skewness risk becomes a key component of portfolio optimization. Understanding these different concepts and how they interconnect is essential for improving multi-asset allocation.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "4139e60c07d60cc5eb62a5b1dbf80695",
"text": "Previous research has primarily examined consumers’ perceived usefulness of web sites and trust in the web retailer as two major predictors of web site use and e-commerce adoption. While the consumers’ repeated behavior in the past (i.e., habit) may contribute to continuance behavior, it has not been investigated. This article includes habit as a primary construct along with perceived usefulness and trust to predict and explain consumers’ continued behavior of using a B2C web site. Additionally, included are several web quality measures as antecedents to trust and perceived usefulness. The research model is evaluated using structural equation modeling. Results show that consumers’ behavioral intentions to continue using a B2C web site are determined by all three key drivers: perceived usefulness, trust, and habit. Furthermore, not all dimensions of web quality have a significant effect on perceived usefulness and trust.",
"title": ""
},
{
"docid": "cefcd78be7922f4349f1bb3aa59d2e1d",
"text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …",
"title": ""
},
{
"docid": "bb685e028e4f1005b7fe9da01f279784",
"text": "Although there are few efficient algorithms in the literature for scientific workflow tasks allocation and scheduling for heterogeneous resources such as those proposed in grid computing context, they usually require a bounded number of computer resources that cannot be applied in Cloud computing environment. Indeed, unlike grid, elastic computing, such asAmazon's EC2, allows users to allocate and release compute resources on-demand and pay only for what they use. Therefore, it is reasonable to assume that the number of resources is infinite. This feature of Clouds has been called âillusion of infiniteresourcesâ. However, despite the proven benefits of using Cloud to run scientific workflows, users lack guidance for choosing between multiple offering while taking into account several objectives which are often conflicting. On the other side, the workflow tasks allocation and scheduling have been shown to be NP-complete problems. Thus, it is convenient to use heuristic rather than deterministic algorithm. The objective of this paper is to design an allocation strategy for Cloud computing platform. More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.",
"title": ""
},
{
"docid": "c92ddd8bc6e7fc7e759a08f3c4303ea8",
"text": "Defect detection and classification in semiconductor wafers has received an increasing attention from both industry and academia alike. Wafer defects are a serious problem that could cause massive losses to the companies' yield. The defects occur as a result of a lengthy and complex fabrication process involving hundreds of stages, and they can create unique patterns. If these patterns were to be identified and classified correctly, then the root of the fabrication problem can be recognized and eventually resolved. Machine learning (ML) techniques have been widely accepted and are well suited for such classification-/identification problems. However, none of the existing ML model's performance exceeds 96% in identification accuracy for such tasks. In this paper, we develop a state-of-the-art classifying algorithm using multiple ML techniques, relying on a general-regression-network-based consensus learning model along with a powerful randomization technique. We compare our proposed method with the widely used ML models in terms of model accuracy, stability, and time complexity. Our method has proved to be more accurate and stable as compared to any of the existing algorithms reported in the literature, achieving its accuracy of 99.8%, stability of 1.128, and TBM of 15.8 s.",
"title": ""
},
{
"docid": "1e1237c45acf5f09a3e945b17d4bbaf8",
"text": "Multicore systems have not only become ubiquitous in the desktop and server worlds, but are also becoming the standard in the embedded space. Multicore offers programability and flexibility over traditional ASIC solutions. However, many of the advantages of switching to multicore hinge on the assumption that software development is simpler and less costly than hardware development. However, the design and development of correct, high-performance, multi-threaded programs is a difficult challenge for most programmers. Stream programming is one model that has wide applicability in the multimedia, signal processing, and networking domains. Streaming is convenient for developers because it separates the creation of actors, or functions that operate on packets of data, from the flow of data through the system. However, stream compilers are generally ineffective for embedded systems because they do not handle strict resource or timing constraints. Specifically, real-time deadlines and memory size limitations are not handled by conventional stream partitioning and scheduling techniques. This paper introducesthe SPIR compiler that orchestrates the execution of streaming applications with strict memory and timing constraints. Software defined radio or SDR is chosen as the application space to illustrate the effectiveness of the compiler for mapping applications onto the IBM Cell platform.",
"title": ""
},
{
"docid": "eded4097ba2c00f8511a1389538c5f8a",
"text": "Ontology engineering and maintenance require (semi-)automated ontology change operations. Intensive research has been conducted on TBox and ABox changes in description logics (DLs), and various change operators have been proposed in the literature. Existing operators largely fall into two categories: syntax-based and model-based. While each approach has its advantages and disadvantages, an important topic that has rarely been explored is how to achieve a balance between syntax-based and model-based approaches. Also, most existing operators are specially designed for either TBox change or ABox change, and cannot handle the general ontology revision task—given a DL knowledge base (KB, a pair consisting of a TBox and an ABox), how to revise it by a set of TBox and ABox axioms (i.e., a new DL KB). In this article, we introduce an alternative structure for DL-Lite, called a featured interpretation, and show that featured models provide a finite and tight characterization to the classical semantics of DL-Lite. A key issue for defining a change operator is the so-called expressibility, that is, whether a set of models (or featured models here) is axiomatizable in DLs. It is indeed much easier to obtain expressibility results for featured models than for classical DL models. As a result, the new semantics determined by featured models provides a method for defining and studying various changes of DL-Lite KBs that involve both TBoxes and ABoxes. To demonstrate the usefulness of the new semantic characterization in ontology change, we define two revision operators for DL-Lite KBs using featured models and study their properties. In particular, we show that our two operators both satisfy AGM postulates. We show that the complexity of our revisions is ΠP2-complete, that is, on the same level as major revision operators in propositional logic, which further justifies the feasibility of our revision approach for DL-Lite. Also, we develop algorithms for these DL-Lite revisions.",
"title": ""
},
{
"docid": "5854e4fe2abddc407273b5df65b0c97b",
"text": "This reprint is provided for personal and noncommercial use. For any other use, please send a request to Permissions,",
"title": ""
}
] |
scidocsrr
|
1f7896d2a4a6fd2000a8cf4b5ad2161b
|
Numerical password via graphical input — An authentication system on embedded platform
|
[
{
"docid": "9975e61afd0bf521c3ffbf29d0f39533",
"text": "Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "e9f7bf5eb9bf3c2c3ff7820ffb34cb93",
"text": "BACKGROUND\nThe transconjunctival lower eyelid blepharoplasty is advantageous for its quick recovery and low complication rates. Conventional techniques rely on fat removal to contour the lower eyelid. This article describes the authors' extended transconjunctival lower eyelid blepharoplasty technique that takes dissection beyond the orbital rim to address aging changes on the midcheek.\n\n\nMETHODS\nFrom December of 2012 to December of 2015, 54 patients underwent this procedure. Through a transconjunctival incision, the preseptal space was entered and excess orbital fat pads were excised. Medially, the origins of the palpebral part of the orbicularis oculi, the tear trough ligament, and orbital part of the orbicularis oculi were sequentially released, connecting the dissection with the premaxillary space. More laterally, the orbicularis retaining ligament was released, connecting the dissection with the prezygomatic space. Excised orbital fat was then grafted under the released tear trough ligament to correct the tear trough deformity. When the patients had significant maxillary retrusion, structural fat grafting was performed at the same time.\n\n\nRESULTS\nThe mean follow-up was 10 months. High satisfaction was noted among the patients treated with this technique. The revision rate was 2 percent. Complication rates were low. No chemosis, prolonged swelling, lower eyelid retraction, or ectropion was seen in any patients.\n\n\nCONCLUSION\nThe extended transconjunctival lower blepharoplasty using the midcheek soft-tissue spaces is a safe and effective approach for treating patients presenting with eye bags and tear trough deformity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "466e715ad00c023ab27871d6b4e98104",
"text": "Cheap and versatile cameras make it possible to easily and quickly capture a wide variety of documents. However, low resolution cameras present a challenge to OCR because it is virtually impossible to do character segmentation independently from recognition. In this paper we solve these problems simultaneously by applying methods borrowed from cursive handwriting recognition. To achieve maximum robustness, we use a machine learning approach based on a convolutional neural network. When our system is combined with a language model using dynamic programming, the overall performance is in the vicinity of 80-95% word accuracy on pages captured with a 1024/spl times/768 webcam and 10-point text.",
"title": ""
},
{
"docid": "c167db403413e60c2ed15e728bca81b4",
"text": "OBJECTIVES\nAttachment style refers to a systematic pattern of emotions, behaviors, and expectations that people have for how others will respond in relationships. Extensive evidence has documented the importance of attachment security in infants, children, adolescents, and adults, but the effects of attachment among exclusively older adult populations have received less attention. The present study explored the relationships between attachment style in late adulthood and eudaimonic well-being, which refers to a life replete with meaning, productive activity, and striving to reach one's potential. It also explored the mediating role of self-compassion, which can be described as a kind and forgiving attitude toward the self.\n\n\nMETHOD\nA sample of 126 community-dwelling older adults (mean age = 70.40 years) completed measures tapping adult attachment, self-compassion, and six theoretically derived markers of eudaimonic well-being.\n\n\nRESULTS\nAttachment anxiety and avoidance were inversely related to self-acceptance, personal growth, interpersonal relationship quality, purpose in life, and environmental mastery. Mediation analyses showed that self-compassion mediated each of these relationships.\n\n\nCONCLUSION\nResults support the importance of attachment orientation for psychological well-being in late life and indicate that secure attachment facilitates an attitude of kindness and acceptance toward the self.",
"title": ""
},
{
"docid": "37f4da100d31ad1da1ba21168c95d7e9",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "41e04cbe2ca692cb65f2909a11a4eb5b",
"text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This mechanism provides a probabilistic guarantee that transactions will not be reversed once they are sufficiently deep in the blockchain, assuming an attacker controls a bounded fraction of mining power in the network. We show, however, that when miners are rational this guarantee can be undermined by a whale attack in which an attacker issues an off-theblockchain whale transaction with an anomalously large transaction fee in an effort to convince miners to fork the current chain. We carry out a game-theoretic analysis and simulation of this attack, and show conditions under which it yields an expected positive payoff for the attacker.",
"title": ""
},
{
"docid": "01659bb903dcffc36500c349cb7dbf88",
"text": "To try to decrease the preference of the attribute values for information gain and information gain ratio, in the paper, the authors puts forward a improved algorithm of C4.5 decision tree on the selection classification attribute. The basic thought of the algorithm is as follows: Firstly, computing the information gain of selection classification attribute, and then get an attribute of the information gain which is higher than the average level; Secondly, computing separately the arithmetic average value of the information gain ratio and information gain of the attribute, and then select the biggest attribute of the average value and set up a branch decision; Finally, to use recursive method to build a decision tree. The experiment shows that this method is applicable and effective.",
"title": ""
},
{
"docid": "35ea891529613ff69ce304cdd3968cab",
"text": "In virtual reality exposure therapy (VRET), patients are exposed to virtual environments that resemble feared real-life situations. The aim of the current study was to assess the extent to which VRET gains can be observed in real-life situations. We conducted a meta-analysis of clinical trials applying VRET to specific phobias and measuring treatment outcome by means of behavioral laboratory tests or recordings of behavioral activities in real-life. Data sources were searches of databases (Medline, PsycInfo, and Cochrane). We included in total 14 clinical trials on specific phobias. Results revealed that patients undergoing VRET did significantly better on behavioral assessments following treatment than before treatment, with an aggregated uncontrolled effect size of g = 1.23. Furthermore, patients undergoing VRET performed better on behavioral assessments at post-treatment than patients on wait-list (g = 1.41). Additionally, results of behavioral assessment at post-treatment and at follow-up revealed no significant differences between VRET and exposure in vivo (g = -0.09 and 0.53, respectively). Finally, behavioral measurement effect sizes were similar to those calculated from self-report measures. The findings demonstrate that VRET can produce significant behavior change in real-life situations and support its application in treating specific phobias.",
"title": ""
},
{
"docid": "e0309023bca92f5401e6f2bcd194024e",
"text": "Watching English-spoken films with subtitles is becoming increasingly popular throughout the world. One reason for this trend is the assumption that perceptual learning of the sounds of a foreign language, English, will improve perception skills in non-English speakers. Yet, solid proof for this is scarce. In order to test the potential learning effects derived from watching subtitled media, a group of intermediate Spanish students of English as a foreign language watched a 1h-long episode of a TV drama in its original English version, with English, Spanish or no subtitles overlaid. Before and after the viewing, participants took a listening and vocabulary test to evaluate their speech perception and vocabulary acquisition in English, plus a final plot comprehension test. The results of the listening skills tests revealed that after watching the English subtitled version, participants improved these skills significantly more than after watching the Spanish subtitled or no-subtitles versions. The vocabulary test showed no reliable differences between subtitled conditions. Finally, as one could expect, plot comprehension was best under native, Spanish subtitles. These learning effects with just 1 hour exposure might have major implications with longer exposure times.",
"title": ""
},
{
"docid": "77f408e456970e32551767e847ca1c19",
"text": "Many graph analytics problems can be solved via iterative algorithms where the solutions are often characterized by a set of steady-state conditions. Different algorithms respect to different set of fixed point constraints, so instead of using these traditional algorithms, can we learn an algorithm which can obtain the same steady-state solutions automatically from examples, in an effective and scalable way? How to represent the meta learner for such algorithm and how to carry out the learning? In this paper, we propose an embedding representation for iterative algorithms over graphs, and design a learning method which alternates between updating the embeddings and projecting them onto the steadystate constraints. We demonstrate the effectiveness of our framework using a few commonly used graph algorithms, and show that in some cases, the learned algorithm can handle graphs with more than 100,000,000 nodes in a single machine.",
"title": ""
},
{
"docid": "83cffbfa5ceebaf5358128892642f4e4",
"text": "We present a robust and efficient algorithm for the pairwise non-rigid registration of partially overlapped 3D surfaces. Our approach treats non-rigid registration as an optimization problem and solves it by alternating between correspondence and deformation optimization. Assuming approximately isometric deformations, robust correspondences are generated using a pruning mechanism based on geodesic consistency. We iteratively learn an appropriate deformation discretization from the current set of correspondences and use it to update the correspondences in the next iteration. Our algorithm is able to register partially similar point clouds that undergo large deformations, in just a few seconds. We demonstrate the potential of our algorithm in various applications such as example based articulated segmentation, and shape interpolation.",
"title": ""
},
{
"docid": "6df61e330f6b71c4ef136e3a2220a5e2",
"text": "In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries had also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.",
"title": ""
},
{
"docid": "9c98b0652776a8402979134e753a8b86",
"text": "In this paper, the shielded coil structure using the ferrites and the metallic shielding is proposed. It is compared with the unshielded coil structure (i.e. a pair of circular loop coils only) to demonstrate the differences in the magnetic field distributions and system performance. The simulation results using the 3D Finite Element Analysis (FEA) tool show that it can considerably suppress the leakage magnetic field from 100W-class wireless power transfer (WPT) system with the enhanced system performance.",
"title": ""
},
{
"docid": "7ed1fabaa95eaa1afb52c2f73230b3b0",
"text": "BACKGROUND\nAdult circumcision is an extremely common surgical operation. As such, we developed a simple model to teach junior doctors the various techniques of circumcision in a safe, reliable, and realistic manner.\n\n\nMATERIALS AND METHODS\nA commonly available simulated model penis (Pharmabotics, Limited, Winchester, United Kingdom) is used, which is then covered with a 30-mm diameter, 400-mm long, double-layered simulated bowel (Limbs & Things, Bristol, United Kingdom). The 2 layers of the prepuce are simulated by folding the simulated bowel on itself. The model has been officially adopted in the UroEmerge hands-on practical skills course and all participants were asked to provide feedback about their experience on a scale from 1 to 10 (1 = extremely unsatisfied and 10 = excellent).\n\n\nRESULTS\nThe model has been used successfully to demonstrate, teach, and practice adult circumcision as well as other penile procedures with rating by trainees ranged from 7 to 10 (median: 9), and 9 of 12 trainees commented on the model using expressions such as \"life like,\" \"excellent idea,\" or \"extremely beneficial.\"\n\n\nCONCLUSIONS\nThe model is particularly useful as it is life like, realistic, easy to set up, and can be used to repeatedly demonstrate circumcision, as well as other surgical procedures, such as dorsal slit and paraphimosis reduction.",
"title": ""
},
{
"docid": "7f5e1955b24fe6dee456e5178114a020",
"text": "The scale of Android applications in the market is growing rapidly. To efficiently detect the malicious behavior in these applications, an array of static analysis tools are proposed. However, static analysis tools suffer from code hiding techniques like packing, dynamic loading, self modifying, and reflection. In this paper, we thus present DexLego, a novel system that performs a reassembleable bytecode extraction for aiding static analysis tools to reveal the malicious behavior of Android applications. DexLego leverages just-in-time collection to extract data and bytecode from an application at runtime, and reassembles them to a new Dalvik Executable (DEX) file offline. The experiments on DroidBench and real-world applications show that DexLego precisely reconstructs the behavior of an application in the reassembled DEX file, and significantly improves analysis result of the existing static analysis systems.",
"title": ""
},
{
"docid": "0b0fac5bf220e2bb8a545e988fa5123f",
"text": "Graphite nanoplatelets have recently attracted considerable attention as a viable and inexpensive filler substitute for carbon nanotubes in nanocomposites, given the predicted excellent in-plane mechanical, structural, thermal, and electrical properties of graphite. As with carbon nanotubes, full utilization of graphite nanoplatelets in polymer nanocomposite applications will inevitably depend on the ability to achieve complete dispersion of the nano-filler component in the polymer matrix of choice. In this communication, we describe a method for preparing watersoluble polymer-coated graphitic nanoplatelets. We prepare graphite nanoplatelets via the chemical reduction of exfoliated graphite oxide nanoplatelets. Graphite oxide is produced by the oxidative treatment of graphite. It still possesses a layered structure, but is much lighter in color than graphite due to the loss of electronic conjugation brought about during the oxidation. The basal planes of the graphene sheets in graphite oxide are decorated mostly with epoxide and hydroxyl groups, in addition to carbonyl and carboxyl groups, which are located at the edges. These oxygen functionalities alter the van der Waals interactions between the layers of graphite oxide and render them hydrophilic, thus facilitating their hydration and exfoliation in aqueous media. As a result, graphite oxide readily forms stable colloidal dispersions of thin graphite oxide sheets in water. From these stable dispersions, thin ‘‘graphitic’’ nanoplatelets can be obtained by chemical deoxygenation, e.g., removal of the oxygen functionalities with partial restoration of the aromatic graphene network. It is possible that even single graphite sheets (i.e., finite-sized graphene sheets) can be accessed via graphite oxide exfoliation and a subsequent solution-based chemical reduction. In practice, reduction of water-dispersed graphite oxide nanoplatelets results in a gradual decrease in their hydrophilic character, which eventually leads to their irreversible agglomeration and precipitation. However, stable aqueous dispersions of reduced graphite oxide nanoplatelets can be prepared if the reduction is carried out in the presence of an anionic polymer. A stable water dispersion of graphite oxide nanoplatelets, prepared by exfoliation of the graphite oxide (1 mg mL) via ultrasonic treatment (Fisher Scientific FS60, 1 h), was reduced with hydrazine hydrate at 100 uC for 24 h. As the reduction proceeds, the brown-colored dispersion of exfoliated graphite oxide turns black and the reduced nanoplatelets agglomerate and eventually precipitate. This precipitated material could not be re-suspended even after prolonged ultrasonic treatment in water in the presence of surfactants such as sodium dodecylsulfate (SDS) and TRITON X-100, which have been found to successfully solubilize carbon nanotubes. Elemental analyses, coupled with Karl Fisher titration (Galbraith Laboratories), of both graphite oxide and the reduced material indicate that there is a considerable increase in C/O atomic ratio in the reduced material (10.3) compared to that in the starting graphite oxide (2.7). Hence, the reduced material can be described as consisting of partially oxidized graphitic nanoplatelets, given that a fair amount of oxygen is retained even after reduction. The black color of the reduced materials suggests a partial re-graphitization of the exfoliated graphite oxide, as observed by others. 
In addition to the decrease in the oxygen level, reduction of graphite oxide is accompanied by nitrogen incorporation from the reducing agent (C/N = 16.1). Attempts to reduce graphite oxide in the presence of SDS and TRITON-X100 also failed to produce a stable aqueous dispersion of graphitic nanoplatelets. However, when the reduction was carried out in the presence of poly(sodium 4-styrenesulfonate) (PSS) (Mw = 70000, Sigma-Aldrich, 10 mg mL⁻¹, 10/1 w/w vs. graphite oxide), a stable black dispersion was obtained. This dispersion can be filtered through a PVDF membrane (0.2 μm pore size, Fisher Scientific) to yield PSS-coated graphitic nanoplatelets that can be re-dispersed readily in water upon mild sonication, forming black suspensions (Fig. 1). At concentrations lower than 0.1 mg mL⁻¹, the dispersions obtained after a 30-minute ultrasonic treatment appear to be stable indefinitely: samples prepared over a year ago are still homogeneous to date. More concentrated dispersions would develop a small amount of precipitate after several days. However, they never fully settle, even upon months of standing. Elemental analysis of the PSS-coated platelets indicates that it contains ~40% polymer as judged by its sulfur content (graphite oxide reduced without any PSS contains no sulfur at all). Its comparatively high oxygen and hydrogen",
"title": ""
},
{
"docid": "6882f244253e0367b85c76bd4884ddaa",
"text": "Publishers of news information are keen to amplify the reach of their content by making it as re-sharable as possible on social media. In this work we study the relationship between the concept of social deviance and the re-sharing of news headlines by network gatekeepers on Twitter. Do network gatekeepers have the same predilection for selecting socially deviant news items as professionals? Through a study of 8,000 news items across 8 major news outlets in the U.S. we predominately find that network gatekeepers re-share news items more often when they reference socially deviant events. At the same time we find and discuss exceptions for two outlets, suggesting a more complex picture where newsworthiness for networked gatekeepers may be moderated by other effects such as topicality or varying motivations and relationships with their audience.",
"title": ""
},
{
"docid": "a7c07c3ab577bc8c5cd2930a2c58c5e0",
"text": "Convolutional neural networks have been widely applied in many low level vision tasks. In this paper, we propose a video super-resolution (SR) method named enhanced video SR network with residual blocks (EVSR). The proposed EVSR fully exploits spatio-temporal information and can implicitly capture motion relations between consecutive frames. Therefore, unlike conventional methods to video SR, EVSR does not require an explicit motion compensation process. In addition, residual learning framework exhibits excellence in convergence rate and performance improvement. Based on this, residual blocks and long skip-connection with dimension adjustment layer are proposed to predict high-frequency details. Extensive experiments validate the superiority of our approach over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "2e42e1f9478fb2548e39a92c5bacbaab",
"text": "In this paper, we consider a fully automatic makeup recommendation system and propose a novel examples-rules guided deep neural network approach. The framework consists of three stages. First, makeup-related facial traits are classified into structured coding. Second, these facial traits are fed into examples-rules guided deep neural recommendation model which makes use of the pairwise of Before-After images and the makeup artist knowledge jointly. Finally, to visualize the recommended makeup style, an automatic makeup synthesis system is developed as well. To this end, a new Before-After facial makeup database is collected and labeled manually, and the knowledge of makeup artist is modeled by knowledge base system. The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways and the makeup synthesis accuracy which outperforms the state of the art methods by large margin. It is also worthy to note that the proposed framework is a pioneering fully automatic makeup recommendation systems to our best knowledge.",
"title": ""
},
{
"docid": "11acd265c1d533916b797bd6015b9eef",
"text": "Genetic and anatomical evidence suggests that Homo sapiens arose in Africa between 200 and 100ka, and recent evidence suggests that complex cognition may have appeared between ~164 and 75ka. This evidence directs our focus to Marine Isotope Stage (MIS) 6, when from 195-123ka the world was in a fluctuating but predominantly glacial stage, when much of Africa was cooler and drier, and when dated archaeological sites are rare. Previously we have shown that humans had expanded their diet to include marine resources by ~164ka (±12ka) at Pinnacle Point Cave 13B (PP13B) on the south coast of South Africa, perhaps as a response to these harsh environmental conditions. The associated material culture documents an early use and modification of pigment, likely for symbolic behavior, as well as the production of bladelet stone tool technology, and there is now intriguing evidence for heat treatment of lithics. PP13B also includes a later sequence of MIS 5 occupations that document an adaptation that increasingly focuses on coastal resources. A model is developed that suggests that the combined richness of the Cape Floral Region on the south coast of Africa, with its high diversity and density of geophyte plants and the rich coastal ecosystems of the associated Agulhas Current, combined to provide a stable set of carbohydrate and protein resources for early modern humans along the southern coast of South Africa during this crucial but environmentally harsh phase in the evolution of modern humans. Humans structured their mobility around the use of coastal resources and geophyte abundance and focused their occupation at the intersection of the geophyte rich Cape flora and coastline. The evidence for human occupation relative to the distance to the coastline over time at PP13B is consistent with this model.",
"title": ""
},
{
"docid": "b674e35f81a1b808b9ed0955f3b23eb6",
"text": "Purpose – Despite the claim that internal corporate social responsibility plays an important role, the understanding of this phenomenon has been neglected. This paper intends to contribute to fill this gap by looking into the relation between CSR and employee engagement. Design/methodology/approach – A survey research was conducted and three different groups of respondents were faced with three different CSR scenarios (general, internal, external) and respondents’ employee engagement was measured. Findings – The results show that there are no statistically significant differences in levels of engagement between employees exposed to external and internal CSR practices. Nevertheless, employees exposed to internal CSR are more engaged than those exposed only to external CSR practices. Research limitations/implications – The use of scenarios, although a grounded approach, involves risks, including the difficulty of participants to put themselves in a fictional situation. Also, the scale used to measure employee engagement puts the emphasis on work rather than on the organisation. Practical implications – Although this study is not conclusive it raises the need for companies to look at their CSR strategy in a holistic approach, i.e. internal and external. Originality/value – This paper represents a contribution to understand CSR strategic status and the need to enlighten the impact that social responsible practices can have on employees’ engagement.",
"title": ""
}
] |
scidocsrr
|
f9239ffd2f5151ea2a3e73d32e4da101
|
Optimizing Multiway Joins in a Map-Reduce Environment
|
[
{
"docid": "f84e0d8892d0b9d0b108aa5dcf317037",
"text": "We present a continuously adaptive, continuous query (CACQ) implementation based on the eddy query processing framework. We show that our design provides significant performance benefits over existing approaches to evaluating continuous queries, not only because of its adaptivity, but also because of the aggressive cross-query sharing of work and space that it enables. By breaking the abstraction of shared relational algebra expressions, our Telegraph CACQ implementation is able to share physical operators --- both selections and join state --- at a very fine grain. We augment these features with a grouped-filter index to simultaneously evaluate multiple selection predicates. We include measurements of the performance of our core system, along with a comparison to existing continuous query approaches.",
"title": ""
}
] |
[
{
"docid": "92cf6e3fd47d40c52bb80faaafab07c8",
"text": "Graham-Little syndrome, also know as Graham-Little-Piccardi-Lassueur syndrome, is an unusual form of lichen planopilaris, characterized by the presence of cicatricial alopecia on the scalp, keratosis pilaris of the trunk and extremities, and non-cicatricial hair loss of the pubis and axillae. We present the case of a 47-year-old woman whose condition was unusual in that there was a prominence of scalp findings. Her treatment included a topical steroid plus systemic prednisone beginning at 30 mg every morning, which rendered her skin smooth, but did not alter her scalp lopecia.",
"title": ""
},
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "a7e5f9cf618d6452945cb6c4db628bbb",
"text": "we present a motion capture device to measure in real-time table tennis strokes. A six degree-of-freedom sensing device, inserted into the racket handle, measures 3D acceleration and 3-axis angular velocity values at a high sampling rate. Data are wirelessly transmitted to a computer in real-time. This flexible system allows for recording and analyzing kinematics information on the motion of the racket, along with synchronized video and sound recordings. Recorded gesture data are analyzed using several algorithms we developed to segment and extract movement features, and to build a reference motion database.",
"title": ""
},
{
"docid": "2e72e09edaa4a13337609c058f139f6e",
"text": "Numerous experimental, epidemiologic, and clinical studies suggest that nonsteroidal anti-inflammatory drugs (NSAIDs), particularly the highly selective cyclooxygenase (COX)-2 inhibitors, have promise as anticancer agents. NSAIDs restore normal apoptosis in human adenomatous colorectal polyps and in various cancer cell lines that have lost adenomatous polyposis coli gene function. NSAIDs also inhibit angiogenesis in cell culture and rodent models of angiogenesis. Many epidemiologic studies have found that long-term use of NSAIDs is associated with a lower risk of colorectal cancer, adenomatous polyps, and, to some extent, other cancers. Two NSAIDs, sulindac and celecoxib, have been found to inhibit the growth of adenomatous polyps and cause regression of existing polyps in randomized trials of patients with familial adenomatous polyposis (FAP). However, unresolved questions about the safety, efficacy, optimal treatment regimen, and mechanism of action of NSAIDs currently limit their clinical application to the prevention of polyposis in FAP patients. Moreover, the development of safe and effective drugs for chemoprevention is complicated by the potential of even rare, serious toxicity to offset the benefit of treatment, particularly when the drug is administered to healthy people who have low annual risk of developing the disease for which treatment is intended. This review considers generic approaches to improve the balance between benefits and risks associated with the use of NSAIDs in chemoprevention. We critically examine the published experimental, clinical, and epidemiologic literature on NSAIDs and cancer, especially that regarding colorectal cancer, and identify strategies to overcome the various logistic and scientific barriers that impede clinical trials of NSAIDs for cancer prevention. Finally, we suggest research opportunities that may help to accelerate the future clinical application of NSAIDs for cancer prevention or treatment.",
"title": ""
},
{
"docid": "7359e387937ce66ce8565237cbf4f1b0",
"text": "A new design of stripline transition structures and flip-chip interconnects for high-speed digital communication systems implemented in low-temperature cofired ceramic (LTCC) substrates is presented. Simplified fabrication, suitability for LTCC machining, suitability for integration with other components, and connection to integrated stripline or microstrip interconnects for LTCC multichip modules and system on package make this approach well suited for miniaturized, advanced broadband, and highly integrated multichip ceramic modules. The transition provides excellent signal integrity at high-speed digital data rates up to 28 Gbits/s. Full-wave simulations and experimental results demonstrate a cost-effective solution for a wide frequency range from dc to 30 GHz and beyond. Signal integrity and high-speed digital data rate performances are verified through eye diagram and time-domain reflectometry and time-domain transmissometry measurements over a 10-cm long stripline.",
"title": ""
},
{
"docid": "096249a1b13cd994427eacddc8af3cf6",
"text": "Many factors influence the adoption of cloud computing. Organizations must systematically evaluate these factors before deciding to adopt cloud-based solutions. To assess the determinants that influence the adoption of cloud computing, we develop a research model based on the innovation characteristics from the diffusion of innovation (DOI) theory and the technology-organization-environment (TOE) framework. Data collected from 369 firms in Portugal are used to test the related hypotheses. The study also investigates the determinants of cloud-computing adoption in the manufacturing and services sectors. 2014 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +351 914 934 438. E-mail addresses: toliveira@isegi.unl.pt (T. Oliveira), mthomas@vcu.edu (M. Thomas), mariana.espadanal@gmail.com (M. Espadanal).",
"title": ""
},
{
"docid": "313dba70fea244739a45a9df37cdcf71",
"text": "We present KB-UNIFY, a novel approach for integrating the output of different Open Information Extraction systems into a single unified and fully disambiguated knowledge repository. KB-UNIFY consists of three main steps: (1) disambiguation of relation argument pairs via a sensebased vector representation and a large unified sense inventory; (2) ranking of semantic relations according to their degree of specificity; (3) cross-resource relation alignment and merging based on the semantic similarity of domains and ranges. We tested KB-UNIFY on a set of four heterogeneous knowledge bases, obtaining high-quality results. We discuss and provide evaluations at each stage, and release output and evaluation data for the use and scrutiny of the community1.",
"title": ""
},
{
"docid": "ee596f4ef7d41c6b627a6990d54b07c2",
"text": "The objective of this study is to develop effective computational models that can predict student learning gains, preferably as early as possible. We compared a series of Bayesian Knowledge Tracing (BKT) models against vanilla RNNs and Long Short Term Memory (LSTM) based models. Our results showed that the LSTM-based model achieved the highest accuracy and the RNN based model have the highest F1-measure. Interestingly, we found that RNN can achieve a reasonably accurate prediction of student final learning gains using only the first 40% of the entire training sequence; using the first 70% of the sequence would produce a result comparable to using the entire sequence.",
"title": ""
},
{
"docid": "0c28741df3a9bf999f4abe7b840cfb26",
"text": "In this work, we analyze taxi-GPS traces collected in Lisbon, Portugal. We perform an exploratory analysis to visualize the spatiotemporal variation of taxi services; explore the relationships between pick-up and drop-off locations; and analyze the behavior in downtime (between the previous drop-off and the following pick-up). We also carry out the analysis of predictability of taxi trips for the next pick-up area type given history of taxi flow in time and space.",
"title": ""
},
{
"docid": "962defe74dbd614a800b8af8777e33e5",
"text": "Due to the permantently growing amount of textual data, automatic methods for organizing the data are needed. Automatic text classification is one of this methods. It automatically assigns documents to a set of classes based on the textual content of the document. Normally, the set of classes is hierarchically structured but today’s classification approaches ignore hierarchical structures, thereby loosing valuable human knowledge. This thesis exploits the hierarchical organization of classes to improve accuracy and reduce computational complexity. Classification methods from machine learning, namely BoosTexter and the newly introduced CentroidBoosting algorithm, are used for learning hierarchies. In doing so, error propagation from higher level nodes and comparing decisions between independently trained leaf nodes are two problems which are considered in this thesis. Experiments are performed on the Reuters 21578, the Reuters Corpus Volume 1 and the Ohsumed data set, which are well known in literature. Rocchio and Support Vector Machines, which are state of the art algorithms in the field of text classification, serve as base line classifiers. Comparing algorithms is done by applying statistical significance tests. Results show that, depending on the structure of a hierarchy, accuracy improves and computational complexity decreases due to hierarchical classification. Also, the introduced model for comparing leaf nodes yields an increase in performance.",
"title": ""
},
{
"docid": "a0c42d2b0ffd4a784c016663dfb6bb4e",
"text": "College of Information and Electrical Engineering, China Agricultural University, Beijing, China Abstract. This paper presents a system framework taking the advantages of the WSN for the real-time monitoring on the water quality in aquaculture. We design the structure of the wireless sensor network to collect and continuously transmit data to the monitoring software. Then we accomplish the configuration model in the software that enhances the reuse and facility of the monitoring project. Moreover, the monitoring software developed to represent the monitoring hardware and data visualization, and analyze the data with expert knowledge to implement the auto control. The monitoring system has been realization of the digital, intelligent, and effectively ensures the quality of aquaculture water. Practical deployment results are to show the system reliability and real-time characteristics, and to display good effect on environmental monitoring of water quality.",
"title": ""
},
{
"docid": "7fac4b577b72cc3efb3a84cc6001bae8",
"text": "When detecting and recording the EMG signal, there are two main issues of concern that influence the fidelity of the signal. The first is the signal to noise ratio. That is, the ratio of the energy in the EMG signal to the energy in the noise signal. In general, noise is defined as electrical signals that are not part of the wanted EMG signal. The other is the distortion of the signal, meaning that the relative contribution of any frequency component in the EMG signal should not be altered. It is well established that the amplitude of the EMG signal is stochastic (random) in nature and can be reasonably represented by a Gausian distribution function. The amplitude of the signal can range from 0 to 10 mV (peak-to-peak) or 0 to 1.5 mV (rms). The usable energy of the signal is limited to the 0 to 500 Hz frequency range, with the dominant energy being in the 50-150 Hz range. Usable signals are those with energy above the electrical noise level. An example of the frequency spectrum of the EMG signal is presented in Figure 1. Figure 1: Frequency spectrum of the EMG signal detected from the Tibialis Anterior muscle during a constant force isometric contraction at 50% of voluntary maximum.",
"title": ""
},
{
"docid": "961372a5e1b21053894040a11e946c8d",
"text": "The main purpose of this paper is to introduce an approach to design a DC-DC boost converter with constant output voltage for grid connected photovoltaic application system. The boost converter is designed to step up a fluctuating solar panel voltage to a higher constant DC voltage. It uses voltage feedback to keep the output voltage constant. To do so, a microcontroller is used as the heart of the control system which it tracks and provides pulse-width-modulation signal to control power electronic device in boost converter. The boost converter will be able to direct couple with grid-tied inverter for grid connected photovoltaic system. Simulations were performed to describe the proposed design. Experimental works were carried out with the designed boost converter which has a power rating of 100 W and 24 V output voltage operated in continuous conduction mode at 20 kHz switching frequency. The test results show that the proposed design exhibits a good performance.",
"title": ""
},
{
"docid": "c450da231d3c3ec8410fe621f4ced54a",
"text": "Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort. However, this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data. This work proposes a novel method for detecting potential false negative training examples using a knowledge inference method. Results show that our approach improves the performance of relation extraction systems trained using distantly supervised data.",
"title": ""
},
{
"docid": "2327dcd9f99af6e7a91597f156267030",
"text": "This paper discusses the need for an integrative literature review on data visualizations, particularly in health and medical contexts. The paper analyzes 25 studies across disciplines. The findings suggest there is little agreement on the best way to visualize complex data for lay audiences, but some emerging effective practices are being develop. Pictographs, icon arrays, and bar charts seem to hold promise for comprehension by users, and visualizations need to be kept as simple as possible with attention to integrating other design features such as headings and legends. The review ends with five specific research areas in which technical and professional communicators should focus their attention on empirical studies that examine: interactive displays, merge attention and comprehension, look at numeracy and risk and finally, cross health and medical subjects.",
"title": ""
},
{
"docid": "5a9190b955c9f1f3a677c079782eb443",
"text": "This paper presents an extraction based single document text summarization technique using Genetic Algorithms. A given document is represented as a weighted Directed Acyclic Graph. A fitness function is defined to mathematically express the quality of a summary in terms of some desired properties of a summary, such as, topic relation, cohesion and readability. Genetic Algorithm is designed to maximize this fitness function, and get the corresponding summary by extracting the most important sentences. Results are compared with a couple of other existing text summarization methods keeping the DUC2002 data as benchmark, and using the precision-recall evaluation technique. The initial results obtained seem promising and encouraging for future work in this area.",
"title": ""
},
{
"docid": "4c3805a6db1d43d01196efe50c14822f",
"text": "Relational tables collected from HTML pages (\"web tables\") are used for a variety of tasks including table extension, knowledge base completion, and data transformation. Most of the existing algorithms for these tasks assume that the data in the tables has the form of binary relations, i.e., relates a single entity to a value or to another entity. Our exploration of a large public corpus of web tables, however, shows that web tables contain a large fraction of non-binary relations which will likely be misinterpreted by the state-of-the-art algorithms. In this paper, we propose a categorisation scheme for web table columns which distinguishes the different types of relations that appear in tables on the Web and may help to design algorithms which better deal with these different types. Designing an automated classifier that can distinguish between different types of relations is non-trivial, because web tables are relatively small, contain a high level of noise, and often miss partial key values. In order to be able to perform this distinction, we propose a set of features which goes beyond probabilistic functional dependencies by using the union of multiple tables from the same web site and from different web sites to overcome the problem that single web tables are too small for the reliable calculation of functional dependencies.",
"title": ""
},
{
"docid": "fb0e9f6f58051b9209388f81e1d018ff",
"text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.",
"title": ""
},
{
"docid": "889f40a9cf201e2874f7ae40e5dc6c35",
"text": "A data mining system, DBMiner, has been developed for interactive mining of multiple-level knowledge in large relational databases. The system implements a wide spectrum of data mining functions, including generalization, characterization, association, classi cation, and prediction. By incorporating several interesting data mining techniques, including attributeoriented induction, statistical analysis, progressive deepening for mining multiple-level knowledge, and meta-rule guided mining, the system provides a userfriendly, interactive data mining environment with good performance.",
"title": ""
}
] |
scidocsrr
|
70beaf80a2f11968730833a41d927927
|
EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD
|
[
{
"docid": "938395ce421e0fede708e3b4ab7185b5",
"text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.",
"title": ""
}
] |
[
{
"docid": "eaf16b3e9144426aed7edc092ad4a649",
"text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "441e22ca7323b7490cbdf7f5e6e85a80",
"text": "Familial gigantiform cementoma (FGC) is a rare autosomal dominant, benign fibro-cemento-osseous lesion of the jaws that can cause severe facial deformity. True FGC with familial history is extremely rare and there has been no literature regarding the radiological follow-up of FGC. We report a case of recurrent FGC in an Asian female child who has been under our observation for 6 years since she was 15 months old. After repeated recurrences and subsequent surgeries, the growth of the tumor had seemed to plateau on recent follow-up CT images. The transition from an enhancing soft tissue lesion to a homogeneous bony lesion on CT may indicate decreased growth potential of FGC.",
"title": ""
},
{
"docid": "76d27ae5220bdd692448797e8115d658",
"text": "Abstinence following daily marijuana use can produce a withdrawal syndrome characterized by negative mood (eg irritability, anxiety, misery), muscle pain, chills, and decreased food intake. Two placebo-controlled, within-subject studies investigated the effects of a cannabinoid agonist, delta-9-tetrahydrocannabinol (THC: Study 1), and a mood stabilizer, divalproex (Study 2), on symptoms of marijuana withdrawal. Participants (n=7/study), who were not seeking treatment for their marijuana use, reported smoking 6–10 marijuana cigarettes/day, 6–7 days/week. Study 1 was a 15-day in-patient, 5-day outpatient, 15-day in-patient design. During the in-patient phases, participants took oral THC capsules (0, 10 mg) five times/day, 1 h prior to smoking marijuana (0.00, 3.04% THC). Active and placebo marijuana were smoked on in-patient days 1–8, while only placebo marijuana was smoked on days 9–14, that is, marijuana abstinence. Placebo THC was administered each day, except during one of the abstinence phases (days 9–14), when active THC was given. Mood, psychomotor task performance, food intake, and sleep were measured. Oral THC administered during marijuana abstinence decreased ratings of ‘anxious’, ‘miserable’, ‘trouble sleeping’, ‘chills’, and marijuana craving, and reversed large decreases in food intake as compared to placebo, while producing no intoxication. Study 2 was a 58-day, outpatient/in-patient design. Participants were maintained on each divalproex dose (0, 1500 mg/day) for 29 days each. Each maintenance condition began with a 14-day outpatient phase for medication induction or clearance and continued with a 15-day in-patient phase. Divalproex decreased marijuana craving during abstinence, yet increased ratings of ‘anxious’, ‘irritable’, ‘bad effect’, and ‘tired.’ Divalproex worsened performance on psychomotor tasks, and increased food intake regardless of marijuana condition. Thus, oral THC decreased marijuana craving and withdrawal symptoms at a dose that was subjectively indistinguishable from placebo. Divalproex worsened mood and cognitive performance during marijuana abstinence. These data suggest that oral THC, but not divalproex, may be useful in the treatment of marijuana dependence.",
"title": ""
},
{
"docid": "c213dd0989659d413b39e6698eb097cc",
"text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the the major transitions in evolution. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.",
"title": ""
},
{
"docid": "9f9128951d6c842689f61fc19c79f238",
"text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. The performance of the algorithms is illustrated using simulated data.",
"title": ""
},
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "c7d54d4932792f9f1f4e08361716050f",
"text": "In this paper, we address several puzzles concerning speech acts,particularly indirect speech acts. We show how a formal semantictheory of discourse interpretation can be used to define speech actsand to avoid murky issues concerning the metaphysics of action. Weprovide a formally precise definition of indirect speech acts, includingthe subclass of so-called conventionalized indirect speech acts. Thisanalysis draws heavily on parallels between phenomena at the speechact level and the lexical level. First, we argue that, just as co-predicationshows that some words can behave linguistically as if they're `simultaneously'of incompatible semantic types, certain speech acts behave this way too.Secondly, as Horn and Bayer (1984) and others have suggested, both thelexicon and speech acts are subject to a principle of blocking or ``preemptionby synonymy'': Conventionalized indirect speech acts can block their`paraphrases' from being interpreted as indirect speech acts, even ifthis interpretation is calculable from Gricean-style principles. Weprovide a formal model of this blocking, and compare it withexisting accounts of lexical blocking.",
"title": ""
},
{
"docid": "bee944285ddd3e1e51e5056720a91aa0",
"text": "The iterative Born approximation (IBA) is a well-known method for describing waves scattered by semitransparent objects. In this letter, we present a novel nonlinear inverse scattering method that combines IBA with an edge-preserving total variation regularizer. The proposed method is obtained by relating iterations of IBA to layers of an artificial multilayer neural network and developing a corresponding error backpropagation algorithm for efficiently estimating the permittivity of the object. Simulations illustrate that, by accounting for multiple scattering, the method successfully recovers the permittivity distribution where the traditional linear inverse scattering fails.",
"title": ""
},
{
"docid": "d70214bbb417b0ff7d4a6efbb24abfb6",
"text": "While deep reinforcement learning techniques have recently produced considerable achievements on many decision-making problems, their use in robotics has largely been limited to simulated worlds or restricted motions, since unconstrained trial-and-error interactions in the real world can have undesirable consequences for the robot or its environment. To overcome such limitations, we propose a novel reinforcement learning architecture, OptLayer, that takes as inputs possibly unsafe actions predicted by a neural network and outputs the closest actions that satisfy chosen constraints. While learning control policies often requires carefully crafted rewards and penalties while exploring the range of possible actions, OptLayer ensures that only safe actions are actually executed and unsafe predictions are penalized during training. We demonstrate the effectiveness of our approach on robot reaching tasks, both simulated and in the real world.",
"title": ""
},
{
"docid": "86ef6a2a5c4f32c466bd3595a828bafb",
"text": "Rectus femoris muscle proximal injuries are not rare conditions. The proximal rectus femoris tendinous anatomy is complex and may be affected by traumatic, microtraumatic, or nontraumatic disorders. A good knowledge of the proximal rectus femoris anatomy allows a better understanding of injury and disorder patterns. A new sonographic lateral approach was recently described to assess the indirect head of the proximal rectus femoris, hence allowing for a complete sonographic assessment of the proximal rectus femoris tendons. This article will review sonographic features of direct, indirect, and conjoined rectus femoris tendon disorders.",
"title": ""
},
{
"docid": "1527601285eb1b2ef2de040154e3d4fb",
"text": "This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.",
"title": ""
},
{
"docid": "09e8e50db9ca9af79005013b73bbb250",
"text": "The number of tools for dynamics simulation has grown in the last years. It is necessary for the robotics community to have elements to ponder which of the available tools is the best for their research. As a complement to an objective and quantitative comparison, difficult to obtain since not all the tools are open-source, an element of evaluation is user feedback. With this goal in mind, we created an online survey about the use of dynamical simulation in robotics. This paper reports the analysis of the participants’ answers and a descriptive information fiche for the most relevant tools. We believe this report will be helpful for roboticists to choose the best simulation tool for their researches.",
"title": ""
},
{
"docid": "68f422172815df9fff6bf515bf7ea803",
"text": "Active learning (AL) promises to reduce the cost of annotating labeled datasets for trainable human language technologies. Contrary to expectations, when creating labeled training material for HPSG parse selection and latereusing it with other models, gains from AL may be negligible or even negative. This has serious implications for using AL, showing that additional cost-saving strategies may need to be adopted. We explore one such strategy: using a model during annotation to automate some of the decisions. Our best results show an 80% reduction in annotation cost compared with labeling randomly selected data with a single model.",
"title": ""
},
{
"docid": "3d3927d6be7ab9575439a3e26102852f",
"text": "A fundamental frequency (F0) estimator named Harvest is described. The unique points of Harvest are that it can obtain a reliable F0 contour and reduce the error that the voiced section is wrongly identified as the unvoiced section. It consists of two steps: estimation of F0 candidates and generation of a reliable F0 contour on the basis of these candidates. In the first step, the algorithm uses fundamental component extraction by many band-pass filters with different center frequencies and obtains the basic F0 candidates from filtered signals. After that, basic F0 candidates are refined and scored by using the instantaneous frequency, and then several F0 candidates in each frame are estimated. Since the frame-by-frame processing based on the fundamental component extraction is not robust against temporally local noise, a connection algorithm using neighboring F0s is used in the second step. The connection takes advantage of the fact that the F0 contour does not precipitously change in a short interval. We carried out an evaluation using two speech databases with electroglottograph (EGG) signals to compare Harvest with several state-of-the-art algorithms. Results showed that Harvest achieved the best performance of all algorithms.",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
{
"docid": "b13286a4875d30f6d32b43dd5d95bd79",
"text": "The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a ~600 m2 testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning we achieve <; 2.6 m error 95% of the time for a dense BLE network (1 beacon per 30 m2), compared to <; 4.8 m for a reduced density (1 beacon per 100 m2) and <; 8.5 m for an established WiFi network in the same area.",
"title": ""
},
{
"docid": "08bb027bc95762431350d2260570faa0",
"text": "RetSim is an agent-based simulator of a shoe store based on the transactional data of one of the largest retail shoe sellers in Sweden. The aim of RetSim is the generation of synthetic data that can be used for fraud detection research. Statistical and a Social Network Analysis (SNA) of relations between staff and customers was used to develop and calibrate the model. Our ultimate goal is for RetSim to be usable to model relevant scenarios to generate realistic data sets that can be used by academia, and others, to develop and reason about fraud detection methods without leaking any sensitive information about the underlying data. Synthetic data has the added benefit of being easier to acquire, faster and at less cost, for experimentation even for those that have access to their own data. We argue that RetSim generates data that usefully approximates the relevant aspects of the real data.",
"title": ""
},
{
"docid": "4ceab082d195c1f69bb98793852f4a29",
"text": "This paper presents a 22 to 26.5 Gb/s optical receiver with an all-digital clock and data recovery (AD-CDR) fabricated in a 65 nm CMOS process. The receiver consists of an optical front-end and a half-rate bang-bang clock and data recovery circuit. The optical front-end achieves low power consumption by using inverter-based amplifiers and realizes sufficient bandwidth by applying several bandwidth extension techniques. In addition, in order to minimize additional jitter at the front-end, not only magnitude and bandwidth but also group-delay responses are considered. The AD-CDR employs an LC quadrature digitally controlled oscillator (LC-QDCO) to achieve a high phase noise figure-of-merit at tens of gigahertz. The recovered clock jitter is 1.28 ps rms and the measured jitter tolerance exceeds the tolerance mask specified in IEEE 802.3ba. The receiver sensitivity is 106 and 184 for a bit error rate of 10-12 at data rates of 25 and 26.5 Gb/s, respectively. The entire receiver chip occupies an active die area of 0.75 mm2 and consumes 254 mW at a data rate of 26.5 Gb/s. The energy efficiencies of the front-end and entire receiver at 26.5 Gb/s are 1.35 and 9.58 pJ/bit, respectively.",
"title": ""
},
{
"docid": "af7d318e1c203358c87592d0c6bcb4d2",
"text": "A fundamental component of spatial modulation (SM), termed generalized space shift keying (GSSK), is presented. GSSK modulation inherently exploits fading in wireless communication to provide better performance over conventional amplitude/phase modulation (APM) techniques. In GSSK, only the antenna indices, and not the symbols themselves (as in the case of SM and APM), relay information. We exploit GSSKpsilas degrees of freedom to achieve better performance, which is done by formulating its constellation in an optimal manner. To support our results, we also derive upper bounds on GSSKpsilas bit error probability, where the source of GSSKpsilas strength is made clear. Analytical and simulation results show performance gains (1.5-3 dB) over popular multiple antenna APM systems (including Bell Laboratories layered space time (BLAST) and maximum ratio combining (MRC) schemes), making GSSK an excellent candidate for future wireless applications.",
"title": ""
}
] |
scidocsrr
|
e9813c24a8a6643d79747c42ab3ba847
|
Iris Center Corneal Reflection Method for Gaze Tracking Using Visible Light
|
[
{
"docid": "e45c07c42c1a7f235dd5cb511c131d30",
"text": "This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP which requires minimal CPU utilization for real-time operation; the predictions of uncertainty made by the S3GP are more accurate than those of other models leading to considerable performance improvements when combined with a probabilistic filter; and the ability to learn from semi-supervised data simplifies the process of collecting training data. The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates: in this capacity our approach is efficient, accurate and versatile.",
"title": ""
}
] |
[
{
"docid": "32effb3b888c5b523c4288f270a9c7f3",
"text": "Deep Neural Networks (DNNs) have advanced the state-of-the-art in a variety of machine learning tasks and are deployed in increasing numbers of products and services. However, the computational requirements of training and evaluating large-scale DNNs are growing at a much faster pace than the capabilities of the underlying hardware platforms that they are executed upon. In this work, we propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep) to reduce the computational requirements of DNNs during inference. Previous efforts propose specialized hardware implementations for DNNs, statically prune the network, or compress the weights. Complementary to these approaches, DyVEDeep is a dynamic approach that exploits the heterogeneity in the inputs to DNNs to improve their compute efficiency with comparable classification accuracy. DyVEDeep equips DNNs with dynamic effort mechanisms that, in the course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while skipping or approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks — one for CIFAR-10 and four for ImageNet (AlexNet, OverFeat and VGG-16, weightcompressed AlexNet). Across all benchmarks, DyVEDeep achieves 2.1×-2.6× reduction in the number of scalar operations, which translates to 1.8×-2.3× performance improvement over a Caffe-based implementation, with < 0.5% loss in accuracy.",
"title": ""
},
{
"docid": "4910c462dd735ef16378c014a7d63c69",
"text": "Malware detection increasingly relies on machine learning techniques, which utilize multiple features to separate the malware from the benign apps. The effectiveness of these techniques primarily depends on the manual feature engineering process, based on human knowledge and intuition. However, given the adversaries' efforts to evade detection and the growing volume of publications on malware behaviors, the feature engineering process likely draws from a fraction of the relevant knowledge. We propose an end-to-end approach for automatic feature engineering. We describe techniques for mining documents written in natural language (e.g. scientific papers) and for representing and querying the knowledge about malware in a way that mirrors the human feature engineering process. Specifically, we first identify abstract behaviors that are associated with malware, and then we map these behaviors to concrete features that can be tested experimentally. We implement these ideas in a system called FeatureSmith, which generates a feature set for detecting Android malware. We train a classifier using these features on a large data set of benign and malicious apps. This classifier achieves a 92.5% true positive rate with only 1% false positives, which is comparable to the performance of a state-of-the-art Android malware detector that relies on manually engineered features. In addition, FeatureSmith is able to suggest informative features that are absent from the manually engineered set and to link the features generated to abstract concepts that describe malware behaviors.",
"title": ""
},
{
"docid": "eb3ca0e61967ded4d39120cf0285abc6",
"text": "This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.",
"title": ""
},
{
"docid": "ee2f1d856532b262455224ebaddf73d1",
"text": "In this paper a behavioral control framework is developed to control anunmanned aerial vehicle-manipulator (UAVM) system, composed by a multirotor aerial vehicle equipped with a robotic arm. The goal is to ensure vehiclearm coordination and manage complex multi-task missions, where different behaviors must be encompassed in a clear and meaningful way. In detail, a control scheme, based on the null space-based behavioral paradigm, is proposed to hanB F. Pierri francesco.pierri@unibas.it K. Baizid khelifa.baizid@mines-douai.fr G. Giglio gerardo.giglio@unibas.it M. A. Trujillo matrujillo@catec.aero G. Antonelli antonelli@unicas.it F. Caccavale fabrizio.caccavale@unibas.it A. Viguria aviguria@catec.aero S. Chiaverini chiaverini@unicas.it A. Ollero aollero@us.es 1 Mines Douai, IA 59508 Douai, France 2 Univ. Lille, 59000 Lille, France 3 University of Basilicata, Potenza, Italy 4 Center for Advanced Aerospace Technologies (CATEC), Seville, Spain 5 University of Cassino and Southern Lazio, Cassino, Italy 6 University of Seville, Seville, Spain dle the coordination between the arm and vehicle motion. To this aim, a set of basic functionalities (elementary behaviors) are designed and combined in a given priority order, in order to attain more complex tasks (compound behaviors). A supervisor is in charge of switching between the compound behaviors according to the mission needs and the sensory feedback. The method is validated on a real testbed, consisting of a multirotor aircraft with an attached 6 Degree of Freedoms manipulator, developed within the EU-funded project ARCAS (Aerial Robotics Cooperative Assembly System). At the the best of authors’ knowledge, this is the first time that an UAVM system is experimentally tested in the execution of complex multi-task missions. The results show that, by properly designing a set of compound behaviors and a supervisor, vehicle-arm coordination in complex missions can be effectively managed.",
"title": ""
},
{
"docid": "354e3d7034f93ff4e319567ce1508680",
"text": "In this paper, we discuss, from an experimental point of view, the use of different control strategies for the trajectory tracking control of an industrial selective compliance assembly robot arm robot, which is one of the most employed manipulators in industrial environments, especially for assembly tasks. Specifically, we consider decentralized controllers such as proportional–integral–derivative-based and sliding-mode ones and model-based controllers such as the classical computed-torque one and a neural-network-based controller. A simple procedure for the estimation of the dynamic model of the manipulator is given. Experimental results provide a detailed framework about the cost/benefit ratio regarding the use of the different controllers, showing that the performance obtained with decentralized controllers may suffice in a large number of industrial applications, but in order to achieve low tracking errors also for high-speed trajectories, it might be convenient to adopt a neural-network-based control scheme, whose implementation is not particularly demanding.",
"title": ""
},
{
"docid": "d509f695435ba51813164ee98512bf06",
"text": "In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising of a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend. OntoDM-core is available at http://www.ontodm.com .",
"title": ""
},
{
"docid": "c8e64b40f971b430505a2c86a3f94c84",
"text": "V. Rastogi, R. Shao, Y. Chen, X. Pan, S. Zou, and R. Riley, “Are These Ads Safe: Detecting Hidden Attacks through the Mobile App-Web Interfaces,” in Proceedings of the Network and Distributed System Security Symposium (NDSS), 2016. V. Rastogi, Z. Qu, J. McClurg, Y. Cao, and Y. Chen, “Uranine: Real-time Privacy Leakage Monitoring without System Modification for Android,” in Proceedings of the 11th International Conference on Security and Privacy in Communication Networks (SecureComm), 2015. B. He, V. Rastogi, Y. Cao, Y. Chen, V. N. Venkatakrishnan, R. Yang, and Z. Zhang, “Vetting SSL Usage in Applications with SSLint,” in Proceedings of the 36th IEEE Symposium on Security and Privacy (Oakland), 2015. Z. Qu, V. Rastogi, X. Zhang, Y. Chen, T. Zhu, and Z. Chen, “AutoCog: Measuring the Description-to-permission Fidelity in Android Applications,” in Proceedings of the 21st Vaibhav Rastogi",
"title": ""
},
{
"docid": "893dd691cd7c5b039c92c8a00145bcfc",
"text": "Computing a bijective spherical parametrization of a genus-0 surface with low distortion is a fundamental task in geometric modeling and processing. Current methods for spherical parametrization cannot, in general, control the worst case distortion of all triangles nor guarantee bijectivity. Given an initial bijective spherical parametrization, with high distortion, we develop a non-linear constrained optimization problem to refine it, with objective penalizing the presence of triangles degeneration and maximal distortion. By using a dynamic adjusting parameter and a constrained, iterative inexact block coordinate descent optimization method, we efficiently and robustly achieve a bijective and low distortion parametrization with an optimal sphere radius. Compared to the state-of-the-art methods, our method is robust to initial parametrization and not sensitive to parameter choice. We demonstrate that our method produces excellent results on numerous models undergoing simple to complex shapes, in comparison to several state-of-the-art methods. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1a38695797b921e35e0987eeed11c95d",
"text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memorythe ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997). observations (and actions) (a) state 1-----1-----1..rep'n observations¢E (and actions) / state t/' rep'n 1-step --+ . delays",
"title": ""
},
{
"docid": "fca617ebfc6dad2db881cdaba9ffe154",
"text": "In this paper, we present ‘Tuskbot’, a wheel-based robot with a novel structure called ‘Tusk’, passive and protruded elements in the front part of the robot, which can create an angle-of-attack when it climbs stairs. The robot can easily overcome stairs with the help of Tusk, which does not require additional active mechanisms. We formulated a simplified mathematical model of the structure based on the geometrical relationship between the wheels and the stairs in each phase during the stair-climb. To test the model and Tusk structure, we calculated the length of each link and the angle of Tusk from the dimension of stair and radius of wheels, and built the robot accordingly. The results demonstrate the validity of the model and the structure.",
"title": ""
},
{
"docid": "d68f1d3762de6db8bf8d67556d4c72ec",
"text": "With the emerging technologies and all associated devices, it is predicted that massive amount of data will be created in the next few years – in fact, as much as 90% of current data were created in the last couple of years – a trend that will continue for the foreseeable future. Sustainable computing studies the process by which computer engineer/scientist designs computers and associated subsystems efficiently and effectively with minimal impact on the environment. However, current intelligent machine-learning systems are performance driven – the focus is on the predictive/classification accuracy, based on known properties learned from the training samples. For instance, most machine-learning-based nonparametric models are known to require high computational cost in order to find the global optima. With the learning task in a large dataset, the number of hidden nodes within the network will therefore increase significantly, which eventually leads to an exponential rise in computational complexity. This paper thus reviews the theoretical and experimental data-modeling literature, in large-scale data-intensive fields, relating to: (1) model efficiency, including computational requirements in learning, and data-intensive areas’ structure and design, and introduces (2) new algorithmic approaches with the least memory requirements and processing to minimize computational cost, while maintaining/improving its predictive/classification accuracy and stability.",
"title": ""
},
{
"docid": "df1d05fba803a691b3d992399f800ca3",
"text": "The usage of Internet is getting widespread, and the service of online video is getting more and more popular. The revenue of the web service providers comes mostly from the advertisements. This study investigates the attitudes toward the advertisements while watching online videos in YouTube. We followed the research of users' attitudes toward advertisements (Brackett & Carr, 2001) and combined it with the theory of reasoned action and the flow theory in the psychology. This study investigates the factor affecting attitudes toward advertisements and the influence to behaviors. Our findings show that the model explained most of the variance of attitudes toward advertisements in sites providing services of online videos indicating that the model is confirmed in the situation of online video advertising. The conclusion and managerial implications have further discussions.",
"title": ""
},
{
"docid": "0ef58b9966c7d3b4e905e8306aad3359",
"text": "Agriculture is the back bone of India. To make the sustainable agriculture, this system is proposed. In this system ARM 9 processor is used to control and monitor the irrigation system. Different kinds of sensors are used. This paper presents a fully automated drip irrigation system which is controlled and monitored by using ARM9 processor. PH content and the nitrogen content of the soil are frequently monitored. For the purpose of monitoring and controlling, GSM module is implemented. The system informs user about any abnormal conditions like less moisture content and temperature rise, even concentration of CO2 via SMS through the GSM module.",
"title": ""
},
{
"docid": "98e7492293b295200b78c99cce8824dd",
"text": "Ann Campbell Burke examines the development and evolution [5] of vertebrates, in particular, turtles [6]. Her Harvard University [7] experiments, described in \"Development of the Turtle Carapace [4]: Implications for the Evolution of a Novel Bauplan,\" were published in 1989. Burke used molecular techniques to investigate the developmental mechanisms responsible for the formation of the turtle shell. Burke's work with turtle embryos has provided empirical evidence for the hypothesis that the evolutionary origins of turtle morphology [8] depend on changes in the embryonic and developmental mechanisms underpinning shell production.",
"title": ""
},
{
"docid": "e954ddbe077762204b5cabb8d083a2a9",
"text": "The aim of this paper is to derive and analyze a variational model for the joint estimation of motion and reconstruction of image sequences, which is based on a time-continuous Eulerian motion model. The model can be set up in terms of the continuity equation or the brightness constancy equation. The analysis in this paper focuses on the latter for robust motion estimation on sequences of twodimensional images. We rigorously prove the existence of a minimizer in a suitable function space setting. Moreover, we discuss the numerical solution of the model based on primal-dual algorithms and investigate several examples. Finally, the benefits of our model compared to existing techniques, such as sequential image reconstruction and motion estimation, are shown.",
"title": ""
},
{
"docid": "a6a98d0599c1339c1f2c6a6c7525b843",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
},
{
"docid": "d35c176cfe5c8296862513c26f0fdffa",
"text": "Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.",
"title": ""
},
{
"docid": "676d16a15d8d4aabef2109d52328bf6c",
"text": "B-cell maturation antigen (BCMA), highly expressed on malignant plasma cells in human multiple myeloma (MM), has not been effectively targeted with therapeutic monoclonal antibodies. We here show that BCMA is universally expressed on the MM cell surface and determine specific anti-MM activity of J6M0-mcMMAF (GSK2857916), a novel humanized and afucosylated antagonistic anti-BCMA antibody-drug conjugate via a noncleavable linker. J6M0-mcMMAF specifically blocks cell growth via G2/M arrest and induces caspase 3-dependent apoptosis in MM cells, alone and in coculture with bone marrow stromal cells or various effector cells. It strongly inhibits colony formation by MM cells while sparing surrounding BCMA-negative normal cells. J6M0-mcMMAF significantly induces effector cell-mediated lysis against allogeneic or autologous patient MM cells, with increased potency and efficacy compared with the wild-type J6M0 without Fc enhancement. The antibody-dependent cell-mediated cytotoxicity and apoptotic activity of J6M0-mcMMAF is further enhanced by lenalidomide. Importantly, J6M0-mcMMAF rapidly eliminates myeloma cells in subcutaneous and disseminated mouse models, and mice remain tumor-free up to 3.5 months. Furthermore, J6M0-mcMMAF recruits macrophages and mediates antibody-dependent cellular phagocytosis of MM cells. Together, these results demonstrate that GSK2857916 has potent and selective anti-MM activities via multiple cytotoxic mechanisms, providing a promising next-generation immunotherapeutic in this cancer.",
"title": ""
},
{
"docid": "26b415f796b85dea5e63db9c58b6c790",
"text": "A predominant portion of Internet services, like content delivery networks, news broadcasting, blogs sharing and social networks, etc., is data centric. A significant amount of new data is generated by these services each day. To efficiently store and maintain backups for such data is a challenging task for current data storage systems. Chunking based deduplication (dedup) methods are widely used to eliminate redundant data and hence reduce the required total storage space. In this paper, we propose a novel Frequency Based Chunking (FBC) algorithm. Unlike the most popular Content-Defined Chunking (CDC) algorithm which divides the data stream randomly according to the content, FBC explicitly utilizes the chunk frequency information in the data stream to enhance the data deduplication gain especially when the metadata overhead is taken into consideration. The FBC algorithm consists of two components, a statistical chunk frequency estimation algorithm for identifying the globally appeared frequent chunks, and a two-stage chunking algorithm which uses these chunk frequencies to obtain a better chunking result. To evaluate the effectiveness of the proposed FBC algorithm, we conducted extensive experiments on heterogeneous datasets. In all experiments, the FBC algorithm persistently outperforms the CDC algorithm in terms of achieving a better dedup gain or producing much less number of chunks. Particularly, our experiments show that FBC produces 2.5 ~ 4 times less number of chunks than that of a baseline CDC which achieving the same Duplicate Elimination Ratio (DER). Another benefit of FBC over CDC is that the FBC with average chunk size greater than or equal to that of CDC achieves up to 50% higher DER than that of a CDC algorithm.",
"title": ""
},
{
"docid": "16c21e5020a518135cce6b2e9a3e11bd",
"text": "This paper measures social media activities of 15 broad scientific disciplines indexed in Scopus database using Altmetric.com data. First, the presence of Altmetric.com data in Scopus database is investigated, overall and across disciplines. Second, a zero-truncated negative binomial model is used to determine the association of various factors with increasing or decreasing citations. Lastly, the effectiveness of altmetric indices to identify publications with high citation impact is comprehensively evaluated by deploying area under the curve (AUC)—an application of receiver operating characteristic. Results indicate a rapid increase in the presence of Altmetric.com data in Scopus database from 10.19% in 2011 to 20.46% in 2015. It was found that Blog count was the most important factor in the field of Health Professions and Nursing as it increased the number of citations by 38.6%, followed by Twitter count increasing the number of citations by 8% in the field of Physics and Astronomy. The results of receiver operating characteristic show that altmetric indices can be a good indicator to discriminate highly cited publications, with an encouragingly AUC = 0.725 between highly cited publications and total altmetric count. Overall, findings suggest that altmetrics can be used to distinguish highly cited publications. The implications of this research are significant in many different directions. Firstly, they set the basis for a further investigation of altmetrics efficiency to predict publications impact and most significantly promote new insights for the measurement of research outcome dissemination over social media.",
"title": ""
}
] |
scidocsrr
|
408e56206bbe7d7710889ede58d05fe7
|
Enhancing the security of user data using the keyword encryption and hybrid cryptographic algorithm in cloud
|
[
{
"docid": "3af00b1320acc7673a86bb39f2385a5e",
"text": "We provide a general framework for constructing identitybased and broadcast encryption systems. In particular, we construct a general encryption system called spatial encryption from which many systems with a variety of properties follow. The ciphertext size in all these systems is independent of the number of users involved and is just three group elements. Private key size grows with the complexity of the system. One application of these results gives the first broadcast HIBE system with short ciphertexts. Broadcast HIBE solves a natural problem having to do with identity-based encrypted email.",
"title": ""
}
] |
[
{
"docid": "54eaba8cca6637bed13cc162edca3c4b",
"text": "Automatic and accurate lung field segmentation is an essential step for developing an automated computer-aided diagnosis system for chest radiographs. Although active shape model (ASM) has been useful in many medical imaging applications, lung field segmentation remains a challenge due to the superimposed anatomical structures. We propose an automatic lung field segmentation technique to address the inadequacy of ASM in lung field extraction. Experimental results using both normal and abnormal chest radiographs show that the proposed technique provides better performance and can achieve 3-6% improvement on accuracy, sensitivity and specificity compared to traditional ASM techniques.",
"title": ""
},
{
"docid": "305ae3e7a263bb12f7456edca94c06ca",
"text": "We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of the increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for timevarying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity.",
"title": ""
},
{
"docid": "258d0290b2cc7d083800d51dfa525157",
"text": "In recent years, study of influence propagation in social networks has gained tremendous attention. In this context, we can identify three orthogonal dimensions—the number of seed nodes activated at the beginning (known as budget), the expected number of activated nodes at the end of the propagation (known as expected spread or coverage), and the time taken for the propagation. We can constrain one or two of these and try to optimize the third. In their seminal paper, Kempe et al. constrained the budget, left time unconstrained, and maximized the coverage: this problem is known as Influence Maximization (or MAXINF for short). In this paper, we study alternative optimization problems which are naturally motivated by resource and time constraints on viral marketing campaigns. In the first problem, termed minimum target set selection (or MINTSS for short), a coverage threshold η is given and the task is to find the minimum size seed set such that by activating it, at least η nodes are eventually activated in the expected sense. This naturally captures the problem of deploying a viral campaign on a budget. In the second problem, termed MINTIME, the goal is to minimize the time in which a predefined coverage is achieved. More precisely, in MINTIME, a coverage threshold η and a budget threshold k are given, and the task is to find a seed set of size at most k such that by activating it, at least η nodes are activated in the expected sense, in the minimum possible time. This problem addresses the issue of timing when deploying viral campaigns. Both these problems are NP-hard, which motivates our interest in their approximation. For MINTSS, we develop a simple greedy algorithm and show that it provides a bicriteria approximation. We also establish a generic hardness result suggesting that improving this bicriteria approximation is likely to be hard. For MINTIME, we show that even bicriteria and tricriteria approximations are hard under several conditions. We show, however, that if we allow the budget for number of seeds k to be boosted by a logarithmic factor and allow the coverage to fall short, then the problem can be solved exactly in PTIME, i.e., we can achieve the required coverage within the time achieved by the optimal solution to MINTIME with budget k and coverage threshold η. Finally, we establish the value of the approximation algorithms, by conducting an experimental evaluation, comparing their quality against that achieved by various heuristics.",
"title": ""
},
{
"docid": "c02fb121399e1ed82458fb62179d2560",
"text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.",
"title": ""
},
{
"docid": "d79b440e5417fae517286206394e8685",
"text": "When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, etc. In this paper, we present a different solution that first detects and then removes aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing vs. non-aliasing regions and aliasing removal. Experiments on both synthetic scene and real light field camera array data sets demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.",
"title": ""
},
{
"docid": "d29f2b03b3ebe488a935e19d87c37226",
"text": "Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by 2 factors: name compatibility and probability level. With transitivity violation correction, high precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. With integration of the author name disambiguation system into the PubMed search engine, the overall click-through-rate of PubMed users on author name query results improved from 34.9% to 36.9%.",
"title": ""
},
{
"docid": "7c449b9714d937dc6a3367a851130c4a",
"text": "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacv, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.",
"title": ""
},
{
"docid": "7700935aeb818b8c863747c0624764db",
"text": "The Internal Model Control (IMC) is a transparent framework for designing and tuning the controller. The proportional-integral (PI) and proportional-integral derivative (PID) controllers have ability to meet most of the control objectives and this led to their widespread acceptance in the control industry. In this paper the IMC-based PID controller is designed. IMC-based PID tuning method is a trade-off between closed-loop performance and robustness to model inaccuracies achieved with a single tuning parameter λ. The IMC-PID controller shows good set-point tracking property. In this paper, Robust stability synthesis of a class of uncertain parameter varying firstorder time-delay systems is presented. The output response characteristics using IMC based PID controller along with characteristics using automatic PID tuner are compared. The performance of IMC based PID for both stable, unstable as well as for the processes with time delay is studied and discussed. Various Order reduction techniques are utilized to reduce higher order polynomial into smaller order transfer function. This paper presents results of the implementation of an Internal Model Control (IMC) based PID controller for the level control application to meet robust performance and to achieve the set point tracking and disturbance rejection.",
"title": ""
},
{
"docid": "de1793c3a682270c0d42694f5df22183",
"text": "Original multidisciplinary research hereby clarifies the complex geodomestication pathways that generated the vast range of banana cultivars (cvs). Genetic analyses identify the wild ancestors of modern-day cvs and elucidate several key stages of domestication for different cv groups. Archaeology and linguistics shed light on the historical roles of people in the movement and cultivation of bananas from New Guinea to West Africa during the Holocene. The historical reconstruction of domestication processes is essential for breeding programs seeking to diversify and improve banana cvs for the future.",
"title": ""
},
{
"docid": "a5274779804272ffc76edfa9b47ef805",
"text": "World energy demand is expected to increase due to the expanding urbanization, better living standards and increasing population. At a time when society is becoming increasingly aware of the declining reserves of fossil fuels beside the environmental concerns, it has become apparent that biodiesel is destined to make a substantial contribution to the future energy demands of the domestic and industrial economies. There are different potential feedstocks for biodiesel production. Non-edible vegetable oils which are known as the second generation feedstocks can be considered as promising substitutions for traditional edible food crops for the production of biodiesel. The use of non-edible plant oils is very significant because of the tremendous demand for edible oils as food source. Moreover, edible oils’ feedstock costs are far expensive to be used as fuel. Therefore, production of biodiesel from non-edible oils is an effective way to overcome all the associated problems with edible oils. However, the potential of converting non-edible oil into biodiesel must be well examined. This is because physical and chemical properties of biodiesel produced from any feedstock must comply with the limits of ASTM and DIN EN specifications for biodiesel fuels. This paper introduces non-edible vegetable oils to be used as biodiesel feedstocks. Several aspects related to these feedstocks have been reviewed from various recent publications. These aspects include overview of non-edible oil resources, advantages of non-edible oils, problems in exploitation of non-edible oils, fatty acid composition profiles (FAC) of various non-edible oils, oil extraction techniques, technologies of biodiesel production from non-edible oils, biodiesel standards and characterization, properties and characteristic of non-edible biodiesel and engine performance and emission production. As a conclusion, it has been found that there is a huge chance to produce biodiesel from non-edible oil sources and therefore it can boost the future production of biodiesel. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f93ebf9beefe35985b6e31445044e6d1",
"text": "Recent genetic studies have suggested that the colonization of East Asia by modern humans was more complex than a single origin from the South, and that a genetic contribution via a Northern route was probably quite substantial. Here we use a spatially-explicit computer simulation approach to investigate the human migration hypotheses of this region based on one-route or two-route models. We test the likelihood of each scenario by using Human Leukocyte Antigen (HLA) − A, −B, and − DRB1 genetic data of East Asian populations, with both selective and demographic parameters considered. The posterior distribution of each parameter is estimated by an Approximate Bayesian Computation (ABC) approach. Our results strongly support a model with two main routes of colonization of East Asia on both sides of the Himalayas, with distinct demographic histories in Northern and Southern populations, characterized by more isolation in the South. In East Asia, gene flow between populations originating from the two routes probably existed until a remote prehistoric period, explaining the continuous pattern of genetic variation currently observed along the latitude. A significant although dissimilar level of balancing selection acting on the three HLA loci is detected, but its effect on the local genetic patterns appears to be minor compared to those of past demographic events.",
"title": ""
},
{
"docid": "af3af0a4102ea0fb555cad52e4cafa50",
"text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.",
"title": ""
},
{
"docid": "13173c83ad22ed0bc588a487433c7333",
"text": "001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109 Multi-view Sparse Co-clustering via Proximal Alternating Linearized Minimization",
"title": ""
},
{
"docid": "77ff4bd27b795212d355162822fc0cdc",
"text": "We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.",
"title": ""
},
{
"docid": "5aa7b8f78bea23dcdd0a083cb88ba6eb",
"text": "PURPOSE\nParents, professionals, and policy makers need information on the long-term prognosis for children with communication disorders. Our primary purpose in this report was to help fill this gap by profiling the family, educational, occupational, and quality of life outcomes of young adults at 25 years of age (N = 244) from the Ottawa Language Study, a 20-year, prospective, longitudinal study of a community sample of individuals with (n = 112) and without (n = 132) a history of early speech and/or language impairments. A secondary purpose of this report was to use data from earlier phases of the study to predict important, real-life outcomes at age 25.\n\n\nMETHOD\nParticipants were initially identified at age 5 and subsequently followed at 12, 19, and 25 years of age. Direct assessments were conducted at all 4 time periods in multiple domains (demographic, communicative, cognitive, academic, behavioral, and psychosocial).\n\n\nRESULTS\nAt age 25, young adults with a history of language impairments showed poorer outcomes in multiple objective domains (communication, cognitive/academic, educational attainment, and occupational status) than their peers without early communication impairments and those with early speech-only impairments. However, those with language impairments did not differ in subjective perceptions of their quality of life from those in the other 2 groups. Objective outcomes at age 25 were predicted differentially by various combinations of multiple, interrelated risk factors, including poor language and reading skills, low family socioeconomic status, low performance IQ, and child behavior problems. Subjective well-being, however, was primarily associated with strong social networks of family, friends, and others.\n\n\nCONCLUSION\nThis information on the natural history of communication disorders may be useful in answering parents' questions, anticipating challenges that children with language disorders might encounter, and planning services to address those issues.",
"title": ""
},
{
"docid": "4e2c119fe6ba46240c2286849a4ee27c",
"text": "Rapid expansion in the digitization of image and image collections has vastly increased the numbers of images available to scholars and researchers through electronic means. This research review will familiarize the reader with current research applicable to the development of image retrieval systems and provides additional material for exploring the topic further, both in print and online. The discussion will cover several broad areas, among them classification and indexing systems used for describing image collections and research initiatives into image access focusing on image attributes, users, queries, tasks, and cognitive aspects of searching. Prospects for the future of image access, including an outline of future research initiatives, are discussed. Further research in each of these areas will provide basic data which will inform and enrich image access system design and will hopefully provide a richer, more flexible, and satisfactory environment for searching for and discovering images. Harnessing the true power of the digital image environment will only be possible when image retrieval systems are coherently designed from principles derived from the fullest range of applicable disciplines, rather than from isolated or fragmented perspectives.",
"title": ""
},
{
"docid": "6b7a1ec7fe105dc7e83291e39e8664ec",
"text": "The clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. Unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. In such high dimensional spaces not all dimensions may be relevant to a given cluster. One way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. Traditional feature selection algorithms attempt to achieve this. The weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. The number of dimensions in each such cluster-specific subspace may also vary. Hence, it may be impossible to find a single small subset of dimensions for all the clusters. We therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves. We develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data.",
"title": ""
},
{
"docid": "313fd10dd4976448a99a40c0d75b4015",
"text": "This paper introduces distributional semantic similarity methods for automatically measuring the coherence of a set of words generated by a topic model. We construct a semantic space to represent each topic word by making use of Wikipedia as a reference corpus to identify context features and collect frequencies. Relatedness between topic words and context features is measured using variants of Pointwise Mutual Information (PMI). Topic coherence is determined by measuring the distance between these vectors computed using a variety of metrics. Evaluation on three data sets shows that the distributional-based measures outperform the state-of-the-art approach for this task.",
"title": ""
},
{
"docid": "6632f299195874af82913b00e67ff652",
"text": "Snake venom possesses various kinds of proteins and neurotoxic polypeptides, which can negatively interfere with the neurotransmitter signaling cascade. This phenomenon occurs mainly due to the blocking of ion channels in the body system. Envenomation prevents or severely interrupts nerve impulses from being transmitted, inhibition of adenosine triphosphate synthesis, and proper functioning of the cardiac muscles. However, some beneficial properties of venoms have also been reported. The aim of this study was to examine the snake venom as an anticancer agent due to its inhibitory effects on cancer progression such as cell motility, cell invasion, and colony formation. In this study, the effect of venoms on phenotypic changes and the change on molecular level in colorectal and breast cancer cell lines were examined. A reduction of 60%-90% in cell motility, colony formation, and cell invasion was observed when these cell lines were treated with different concentrations of snake venom. In addition, the increase in oxidative stress that results in an increase in the number of apoptotic cancer cells was significantly higher in the venom-treated cell lines. Further analysis showed that there was a decrease in the expression of pro-inflammatory cytokines and signaling proteins, strongly suggesting a promising role for snake venom against breast and colorectal cancer cell progression. In conclusion, the snake venoms used in this study showed significant anticancer properties against colorectal and breast cancer cell lines.",
"title": ""
},
{
"docid": "1b41ef1a81776e037b8b4c70f8a45f60",
"text": "The “interpretation” framework in Pattern Recognition (PR) arises in the many cases in which the more classical paradigm of “classification” is not properly applicable, generally because the number of classes is rather large, or simply because the concept of “class” does not hold. A very general way of representing the results of Interpretations of given objects or data is in terms of sentences of a “Semantic Language” in which the actions to be performed for each different object or datum are described. Interpretation can therefore be conveniently formalized through the concept of Formal Transduction, giving rise to the central PR problem of how to automatically learn a transducer from a training set of examples of the desired input-output behavior. This paper presents a formalization of the stated transducer learning problem, as well as an effective and efficient method for the inductive learning of an important class of transducers, namely, the class of Subsequential Tranducers. The capabilities of subsequential transductions are illustrated through a series of experiments which also show the high effectiveness of the proposed learning method to obtain very accurate and compact transducers for the corresponding tasks. * Work partially supported by the Spanish CICYT under grant TIC-0448/89 © IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(5):448-458, 1993.",
"title": ""
}
] |
scidocsrr
|
80080e09b8262a6124c6fed4ccf72a68
|
Linking People in Videos with "Their" Names Using Coreference Resolution
|
[
{
"docid": "74f8127bc620fa1c9797d43dedea4d45",
"text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "d5a4c2d61e7d65f1972ed934f399847e",
"text": "We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"title": ""
}
] |
[
{
"docid": "8f6d8c96c51f210a6711802a2ff32dde",
"text": "People are drawn to play different types of videogames and find enjoyment in a range of gameplay experiences. Envisaging a representative game player or persona allows game designers to personalize game content; however, there are many ways to characterize players and little guidance on which approaches best model player behavior and preference. To provide knowledge about how player characteristics contribute to game experience, we investigate how personality traits as well as player styles from the BrianHex model moderate the prediction of player motivation with a social network game. Our results show that several player characteristics impact motivation, expressed in terms of enjoyment and effort. We also show that player enjoyment and effort, as predicted by our models, impact players’ in-game behaviors, illustrating both the predictive power and practical utility of our models for guiding user adaptation.",
"title": ""
},
{
"docid": "ecf90b3e40eb695eb8b4d6d6701d6b06",
"text": "Digital forensic visualization is an understudied area despite its potential to achieve significant improvements in the efficiency of an investigation, criminal or civil. In this study, a three-stage forensic data storage and visualization life cycle is presented. The first stage is the decoding of data, which involves preparing both structured and unstructured data for storage. In the storage stage, data are stored within our proposed database schema designed for ensuring data integrity and speed of storage and retrieval. The final stage is the visualization of stored data in a manner that facilitates user interaction. These functionalities are implemented in a proof of concept to demonstrate the utility of the proposed life cycle. The proof of concept demonstrates the utility of the proposed approach for the storage and visualization of digital forensic data.",
"title": ""
},
{
"docid": "7cfe3122c904953edf3fcd6c35a549de",
"text": "This paper studies the practical impact of the branching heuristics used in Propositional Satisfiability (SAT) algorithms, when applied to solving real-world instances of SAT. In addition, different SAT algorithms are experimentally evaluated. The main conclusion of this study is that even though branching heuristics are crucial for solving SAT, other aspects of the organization of SAT algorithms are also essential. Moreover, we provide empirical evidence that for practical instances of SAT, the search pruning techniques included in the most competitive SAT algorithms may be of more fundamental significance than branching heuristics.",
"title": ""
},
{
"docid": "5c29083624be58efa82b4315976f8dc2",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition ra tes using fewer features and samples.",
"title": ""
},
{
"docid": "e38aa8466226257ca85e3fe0e709edc9",
"text": "Recently, recurrent neural networks (RNNs) as powerful sequence models have re-emerged as a potential acoustic model for statistical parametric speech synthesis (SPSS). The long short-term memory (LSTM) architecture is particularly attractive because it addresses the vanishing gradient problem in standard RNNs, making them easier to train. Although recent studies have demonstrated that LSTMs can achieve significantly better performance on SPSS than deep feedforward neural networks, little is known about why. Here we attempt to answer two questions: a) why do LSTMs work well as a sequence model for SPSS; b) which component (e.g., input gate, output gate, forget gate) is most important. We present a visual analysis alongside a series of experiments, resulting in a proposal for a simplified architecture. The simplified architecture has significantly fewer parameters than an LSTM, thus reducing generation complexity considerably without degrading quality.",
"title": ""
},
{
"docid": "1cc4048067cc93c2f1e836c77c2e06dc",
"text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.",
"title": ""
},
{
"docid": "f8e6f97f5c797d490e2490dad676f62a",
"text": "Both patients and clinicians may incorrectly diagnose vulvovaginitis symptoms. Patients often self-treat with over-the-counter antifungals or home remedies, although they are unable to distinguish among the possible causes of their symptoms. Telephone triage practices and time constraints on office visits may also hamper effective diagnosis. This review is a guide to distinguish potential causes of vulvovaginal symptoms. The first section describes both common and uncommon conditions associated with vulvovaginitis, including infectious vulvovaginitis, allergic contact dermatitis, systemic dermatoses, rare autoimmune diseases, and neuropathic vulvar pain syndromes. The focus is on the clinical presentation, specifically 1) the absence or presence and characteristics of vaginal discharge; 2) the nature of sensory symptoms (itch and/or pain, localized or generalized, provoked, intermittent, or chronic); and 3) the absence or presence of mucocutaneous changes, including the types of lesions observed and the affected tissue. Additionally, this review describes how such features of the clinical presentation can help identify various causes of vulvovaginitis.",
"title": ""
},
{
"docid": "f2730c0a11e5c3d436c777e51f2142b4",
"text": "The proliferation and ubiquity of temporal data across many disciplines has generated substantial interest in the analysis and mining of time series. Clustering is one of the most popular data-mining methods, not only due to its exploratory power but also because it is often a preprocessing step or subroutine for other techniques. In this article, we present k-Shape and k-MultiShapes (k-MS), two novel algorithms for time-series clustering. k-Shape and k-MS rely on a scalable iterative refinement procedure. As their distance measure, k-Shape and k-MS use shape-based distance (SBD), a normalized version of the cross-correlation measure, to consider the shapes of time series while comparing them. Based on the properties of SBD, we develop two new methods, namely ShapeExtraction (SE) and MultiShapesExtraction (MSE), to compute cluster centroids that are used in every iteration to update the assignment of time series to clusters. k-Shape relies on SE to compute a single centroid per cluster based on all time series in each cluster. In contrast, k-MS relies on MSE to compute multiple centroids per cluster to account for the proximity and spatial distribution of time series in each cluster. To demonstrate the robustness of SBD, k-Shape, and k-MS, we perform an extensive experimental evaluation on 85 datasets against state-of-the-art distance measures and clustering methods for time series using rigorous statistical analysis. SBD, our efficient and parameter-free distance measure, achieves similar accuracy to Dynamic Time Warping (DTW), a highly accurate but computationally expensive distance measure that requires parameter tuning. For clustering, we compare k-Shape and k-MS against scalable and non-scalable partitional, hierarchical, spectral, density-based, and shapelet-based methods, with combinations of the most competitive distance measures. k-Shape outperforms all scalable methods in terms of accuracy. Furthermore, k-Shape also outperforms all non-scalable approaches, with one exception, namely k-medoids with DTW, which achieves similar accuracy. However, unlike k-Shape, this approach requires tuning of its distance measure and is significantly slower than k-Shape. k-MS performs similarly to k-Shape in comparison to rival methods, but k-MS is significantly more accurate than k-Shape. Beyond clustering, we demonstrate the effectiveness of k-Shape to reduce the search space of one-nearest-neighbor classifiers for time series. Overall, SBD, k-Shape, and k-MS emerge as domain-independent, highly accurate, and efficient methods for time-series comparison and clustering with broad applications.",
"title": ""
},
{
"docid": "d55343250b7e13caa787c5b6db52d305",
"text": "Analysis of the face is an essential component of facial plastic surgery. In training, we are taught standards and ideals based on neoclassical models of beauty from Greek and Roman art and architecture. In practice, we encounter a wide range of variation in patient desires and perceptions of beauty. Our goals seem to be ever shifting, yet our education has provided us with a foundation from which to draw ideals of beauty. Plastic surgeons must synthesize classical ideas of beauty with patient desires, cultural nuances, and ethnic considerations all the while maintaining a natural appearance and result. This article gives an overview of classical models of facial proportions and relationships, while also discussing unique ethnic and cultural considerations which may influence the goal for the individual patient.",
"title": ""
},
{
"docid": "bf49aafc53fd8083d5f4e7e015443a71",
"text": "BACKGROUND\nThree intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD).\n\n\nOBJECTIVE\n1) To describe three main large-scale networks of the human brain; 2) to discuss the functioning of these neural networks in PTSD and related symptoms; and 3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders.\n\n\nMETHODS\nLiterature relevant to this commentary was reviewed.\n\n\nRESULTS\nIncreasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed.\n\n\nCONCLUSIONS\nNeuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.",
"title": ""
},
{
"docid": "93076fee7472e1a89b2b3eb93cff4737",
"text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.",
"title": ""
},
{
"docid": "b34beab849a50ff04a948f277643fb74",
"text": "To cite: Hirai T, Koster M. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/ bcr-2013-009759 DESCRIPTION A 22-year-old man with a history of intravenous heroin misuse, presented with 1 week of fatigue and fever. Blood cultures were positive for methicillin-sensitive Staphylococcus aureus. Physical examination showed multiple painful 1– 2 mm macular rashes on the palm and soles bilaterally (figures 1 and 2). Splinter haemorrhages (figure 3) and conjunctival petechiae (figure 4) were also noted. A transoesophageal echocardiogram demonstrated a 16-mm vegetation on the mitral valve (figure 5). Vegitations >10 mm in diameter and infection involving the mitral valve are independently associated with an increased risk of embolisation. However, he decided medical management after extensive discussion and was treated with intravenous nafcillin for 6 weeks. He returned 8 weeks later with acute shortness of breath and evidence of a perforated mitral valve for which he subsequently underwent a successful mitral valve repair with an uneventful recovery.",
"title": ""
},
{
"docid": "0a5df67766cd1027913f7f595950754c",
"text": "While a number of efficient sequential pattern mining algorithms were developed over the years, they can still take a long time and produce a huge number of patterns, many of which are redundant. These properties are especially frustrating when the goal of pattern mining is to find patterns for use as features in classification problems. In this paper, we describe BIDE-Discriminative, a modification of BIDE that uses class information for direct mining of predictive sequential patterns. We then perform an extensive evaluation on nine real-life datasets of the different ways in which the basic BIDE-Discriminative can be used in real multi-class classification problems, including 1-versus-rest and model-based search tree approaches. The results of our experiments show that 1-versus-rest provides an efficient solution with good classification performance.",
"title": ""
},
{
"docid": "0ae2a7701d4e75e7fa6891a8ca554273",
"text": "Multi-instance learning studies problems in which labels are assigned to bags that contain multiple instances. In these settings, the relations between instances and labels are usually ambiguous. In contrast, multi-task learning focuses on the output space in which an input sample is associated with multiple labels. In real world, a sample may be associated with multiple labels that are derived from observing multiple aspects of the problem. Thus many real world applications are naturally formulated as multi-instance multi-task (MIMT) problems. A common approach to MIMT is to solve it task-by-task independently under the multi-instance learning framework. On the other hand, convolutional neural networks (CNN) have demonstrated promising performance in single-instance single-label image classification tasks. However, how CNN deals with multi-instance multi-label tasks still remains an open problem. This is mainly due to the complex multiple-to-multiple relations between the input and output space. In this work, we propose a deep leaning model, known as multi-instance multi-task convolutional neural networks (MIMT-CNN), where a number of images representing a multi-task problem is taken as the inputs. Then a shared sub-CNN is connected with each input image to form instance representations. Those sub-CNN outputs are subsequently aggregated as inputs to additional convolutional layers and full connection layers to produce the ultimate multi-label predictions. This CNN model, through transfer learning from other domains, enables transfer of prior knowledge at image level learned from large single-label single-task data sets. The bag level representations in this model are hierarchically abstracted by multiple layers from instance level representations. Experimental results on mouse brain gene expression pattern annotation data show that the proposed MIMT-CNN model achieves superior performance.",
"title": ""
},
{
"docid": "959547839a5769d6bfcca0efa6568cbf",
"text": "Conventionally, maximum capacities for energy assimilation are presented as daily averages. However, maximum daily energy intake is determined by the maximum metabolizable energy intake rate and the time available for assimilation of food energy. Thrush nightingales (Luscinia luscinia) in migratory disposition were given limited food rations for 3 d to reduce their energy stores. Subsequently, groups of birds were fed ad lib. during fixed time periods varying between 7 and 23 h per day. Metabolizable energy intake rate, averaged over the available feeding time, was 1.9 W and showed no difference between groups on the first day of refueling. Total daily metabolizable energy intake increased linearly with available feeding time, and for the 23-h group, it was well above suggested maximum levels for animals. We conclude that both intake rate and available feeding time must be taken into account when interpreting potential constraints acting on animals' energy budgets. In the 7-h group, energy intake rates increased from 1.9 W on the first day to 3.1 W on the seventh day. This supports the idea that small birds can adaptively increase their energy intake rates on a short timescale.",
"title": ""
},
{
"docid": "eb8d1663cf6117d76a6b61de38b55797",
"text": "Many security experts would agree that, had it not been for mobile configurations, the synthesis of online algorithms might never have occurred. In fact, few computational biologists would disagree with the evaluation of von Neumann machines. We construct a peer-to-peer tool for harnessing Smalltalk, which we call TalmaAment.",
"title": ""
},
{
"docid": "2579cb11b9d451d6017ebb642d6a35cb",
"text": "The presence of bots has been felt in many aspects of social media. Twitter, one example of social media, has especially felt the impact, with bots accounting for a large portion of its users. These bots have been used for malicious tasks such as spreading false information about political candidates and inflating the perceived popularity of celebrities. Furthermore, these bots can change the results of common analyses performed on social media. It is important that researchers and practitioners have tools in their arsenal to remove them. Approaches exist to remove bots, however they focus on precision to evaluate their model at the cost of recall. This means that while these approaches are almost always correct in the bots they delete, they ultimately delete very few, thus many bots remain. We propose a model which increases the recall in detecting bots, allowing a researcher to delete more bots. We evaluate our model on two real-world social media datasets and show that our detection algorithm removes more bots from a dataset than current approaches.",
"title": ""
},
{
"docid": "bfde0c836406a25a08b7c95b330aaafa",
"text": "The concept of agile process models has gained great popularity in software (SW) development community in past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates extreme programming (XP) process model and proposes a novel adaptive process mode based on these modifications. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9a2cfa65fe07d99b354e6f772282ff13",
"text": "Destiny is, to date, the most expensive digital game ever released with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and massively multi-player online game, and has attracted dozens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate the area under curve (ROC) of behavioral features, and use Hidden Markov Models to develop a churn prediction model for the game.",
"title": ""
},
{
"docid": "1b820143d38afa66e3ccf9da80654200",
"text": "Through virtualization, single physical data planes can logically support multiple networking contexts. We propose HyPer4 as a portable virtualization solution. HyPer4 provides a general purpose program, written in the P4 dataplane programming language, that may be dynamically configured to adopt behavior that is functionally equivalent to other P4 programs. HyPer4 extends, through software, the following features to diverse P4-capable devices: the ability to logically store multiple programs and either run them in parallel (network slicing) or as hot-swappable snapshots; and virtual networking between programs (supporting program composition or multi-tenant service interaction). HyPer4 permits modifying the set of programs, as well as the virtual network connecting them, at runtime, without disrupting currently active programs. We show that realistic ASICs-based hardware would be capable of running HyPer4 today.",
"title": ""
}
] |
scidocsrr
|
836b2877b4f3452764c9ec49b593895e
|
Principles of neural ensemble physiology underlying the operation of brain–machine interfaces
|
[
{
"docid": "8047c0ba3b0a2838e7df95c8246863f4",
"text": "Neurons in the ventral premotor cortex of the monkey encode the locations of visual, tactile, auditory and remembered stimuli. Some of these neurons encode the locations of stimuli with respect to the arm, and may be useful for guiding movements of the arm. Others encode the locations of stimuli with respect to the head, and may be useful for guiding movements of the head. We suggest that a general principle of sensory-motor integration is that the space surrounding the body is represented in body-part-centered coordinates. That is, there are multiple coordinate systems used to guide movement, each one attached to a different part of the body. This and other recent evidence from both monkeys and humans suggest that the formation of spatial maps in the brain and the guidance of limb and body movements do not proceed in separate stages but are closely integrated in both the parietal and frontal lobes.",
"title": ""
},
{
"docid": "c3bf2b73b2693c509c228293cd64ce3d",
"text": "In this work we present the first comprehensive survey of Brain Interface (BI) technology designs published prior to January 2006. Detailed results from this survey, which was based on the Brain Interface Design Framework proposed by Mason and Birch, are presented and discussed to address the following research questions: (1) which BI technologies are directly comparable, (2) what technology designs exist, (3) which application areas (users, activities and environments) have been targeted in these designs, (4) which design approaches have received little or no research and are possible opportunities for new technology, and (5) how well are designs reported. The results of this work demonstrate that meta-analysis of high-level BI design attributes is possible and informative. The survey also produced a valuable, historical cross-reference where BI technology designers can identify what types of technology have been proposed and by whom.",
"title": ""
}
] |
[
{
"docid": "04d66f58cea190d7d7ec8654b6c81d3b",
"text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.",
"title": ""
},
{
"docid": "5e4f388f4b18c6667d67d35c4161e037",
"text": "Degradation of underwater images is an atmospheric phenomenon which is a result of scattering and absorption of light. In this paper, we have defined a fusion based approach to enhance the visibility of underwater images. Our method uses only one single hazy image to derive the contrast improved and colour corrected versions of the original image. Further, it removes the distortion and uplifts the visibility of the distant objects in the image by applying weight maps on each of the derived inputs. We have used multi-scale fusion technique to blend the inputs and weight maps together, ensuring that each fused image contributes its most significant feature into the final image. Our technique is simple and straightforward that effectively contribute in enhancing the quality and appearance of underwater hazy images.",
"title": ""
},
{
"docid": "64a730ce8aad5d4679409be43a291da7",
"text": "Background In the last years, it has been seen a shifting on society's consumption patterns, from mass consumption to second-hand culture. Moreover, consumer's perception towards second-hand stores, has been changing throughout the history of second-hand markets, according to the society's values prevailing in each time. Thus, the purchase intentions regarding second-hand clothes are influence by motivational and moderating factors according to the consumer's perception. Therefore, it was employed the theory of Guiot and Roux (2010) on motivational factors towards second-hand shopping and previous researches on moderating factors towards second-hand shopping. Purpose The purpose of this study is to explore consumer's perception and their purchase intentions towards second-hand clothing stores. Method For this, a qualitative and abductive approach was employed, combined with an exploratory design. Semi-structured face-to-face interviews were conducted utilizing a convenience sampling approach. Conclusion The findings show that consumers perception and their purchase intentions are influenced by their age and the environment where they live. However, the environment affect people in different ways. From this study, it could be found that elderly consumers are influenced by values and beliefs towards second-hand clothes. Young people are very influenced by the concept of fashion when it comes to second-hand clothes. For adults, it could be observed that price and the sense of uniqueness driver their decisions towards second-hand clothes consumption. The main motivational factor towards second-hand shopping was price. On the other hand, risk of contamination was pointed as the main moderating factor towards second-hand purchase. The study also revealed two new motivational factors towards second-hand clothing shopping, such charity and curiosity. Managers of second-hand clothing stores can make use of these findings to guide their decisions, especially related to improvements that could be done in order to make consumers overcoming the moderating factors towards second-hand shopping. The findings of this study are especially useful for second-hand clothing stores in Borås, since it was suggested couple of improvements for those stores based on the participant's opinions.",
"title": ""
},
{
"docid": "8355b67520999a30c94ac36a6a01d60c",
"text": "Bitcoin is designed to protect user anonymity (or pseudonymity) in a financial transaction, and has been increasingly adopted by major ecommerce websites such as Dell, PayPal and Expedia. While the anonymity of Bitcoin transactions has been extensively studied, little attention has been paid to the security of post-transaction correspondence. In a commercial application, the merchant and the user often need to engage in follow-up correspondence after a Bitcoin transaction is completed, e.g., to acknowledge the receipt of payment, to confirm the billing address, to arrange the product delivery, to discuss refund and so on. Currently, such follow-up correspondence is typically done in plaintext via email with no guarantee on confidentiality. Obviously, leakage of sensitive data from the correspondence (e.g., billing address) can trivially compromise the anonymity of Bitcoin users. In this paper, we initiate the first study on how to realise end-to-end secure communication between Bitcoin users in a post-transaction scenario without requiring any trusted third party or additional authentication credentials. This is an important new area that has not been covered by any IEEE or ISO/IEC security standard, as none of the existing PKI-based or password-based AKE schemes are suitable for the purpose. Instead, our idea is to leverage the Bitcoin’s append-only ledger as an additional layer of authentication between previously confirmed transactions. This naturally leads to a new category of AKE protocols that bootstrap trust entirely from the block chain. We call this new category “Bitcoin-based AKE” and present two concrete protocols: one is non-interactive with no forward secrecy, while the other is interactive with additional guarantee of forward secrecy. Finally, we present proof-of-concept prototypes for both protocols with experimental results to demonstrate their practical feasibility.",
"title": ""
},
{
"docid": "b7dd9d1cb89ec4aab21b9bb35cec1beb",
"text": "Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land masking strategy, the appropriate model for sea clutter and a neural network as the discrimination scheme. Firstly, the fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of Constant False Alarm Rate (CFAR) detector from K-distribution, Gamma distribution and Rayleigh distribution based on a tradeoff between the sea clutter modeling accuracy and the computational complexity. Furthermore, in order to better implement CFAR detection, we also use truncated statistic (TS) as a preprocessing scheme and iterative censoring scheme (ICS) for boosting the performance of detector. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experiment results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.",
"title": ""
},
{
"docid": "8988596b2b38cf61b8d0f7bb3ad8f5d7",
"text": "National cyber security centers (NCSCs) are gaining more and more importance to ensure the security and proper operations of critical infrastructures (CIs). As a prerequisite, NCSCs need to collect, analyze, process, assess and share security-relevant information from infrastructure operators. A vital capability of mentioned NCSCs is to establish Cyber Situational Awareness (CSA) as a precondition for understanding the security situation of critical infrastructures. This is important for proper risk assessment and subsequent reduction of potential attack surfaces at national level. In this paper, we therefore survey theoretical models relevant for Situational Awareness (SA) and present a collaborative CSA model for NCSCs in order to enhance the protection of CIs at national level. Additionally, we provide an application scenario to illustrate a handson case of utilizing a CSA model in a NCSC, especially focusing on information sharing. We foresee this illustrative scenario to aid decision makers and practitioners who are involved in establishing NCSCs and cyber security processes on national level to better understand the specific implications regarding the application of the CSA model for NCSCs.",
"title": ""
},
{
"docid": "22727f9a6951582de1e98b522b40f68e",
"text": "High-speed electric machines are becoming increasingly important and utilized in many applications. This paper addresses the considerations and challenges of the rotor design of high-speed surface permanent magnet machines. The paper focuses particularly on mechanical aspects of the design. Special attention is given to the rotor sleeve design including thickness and material. Permanent magnet design parameters are discussed. Surface permanent magnet rotor dynamic considerations and challenges are also discussed.",
"title": ""
},
{
"docid": "9d99851970492cc4e8f6ac54967a5229",
"text": "BACKGROUND AND PURPOSE\nTranscranial Doppler (TCD) is used for diagnosis of vasospasm in patients with subarachnoid hemorrhage due to a ruptured aneurysm. Our aim was to evaluate both the accuracy of TCD compared with angiography and its usefulness as a screening method in this setting.\n\n\nMETHODS\nA search (MEDLINE, EMBASE, Cochrane Library, bibliographies, hand searching, any language, through January 31, 2001) was performed for studies comparing TCD with angiography. Data were critically appraised using a modified published 10-point score and were combined using a random-effects model.\n\n\nRESULTS\nTwenty-six reports compared TCD with angiography. Median validity score was 4.5 (range 1 to 8). Meta-analyses could be performed with data from 7 trials. For the middle cerebral artery (5 trials, 317 tests), sensitivity was 67% (95% CI 48% to 87%), specificity was 99% (98% to 100%), positive predictive value (PPV) was 97% (95% to 98%), and negative predictive value (NPV) was 78% (65% to 91%). For the anterior cerebral artery (3 trials, 171 tests), sensitivity was 42% (11% to 72%), specificity was 76% (53% to 100%), PPV was 56% (27% to 84%), and NPV was 69% (43% to 95%). Three of these 7 studies reported on the same patients, each on another artery, and for 4, data recycling could not be disproved. Other arteries were tested in only 1 trial each.\n\n\nCONCLUSIONS\nFor the middle cerebral artery, TCD is not likely to indicate a spasm when angiography does not show one (high specificity), and TCD may be used to identify patients with a spasm (high PPV). For all other situations and arteries, there is either lack of evidence of accuracy or of any usefulness of TCD. Most of these data are of low methodological quality, bias cannot not be ruled out, and data reporting is often uncritical.",
"title": ""
},
{
"docid": "d44fd68ef1713b080594971ec25e26c7",
"text": "This paper presents an experimental study of the electronic differential system with four-wheel, dual-rear in wheel motor independently driven an electric vehicle. It is worth bearing in mind that the electronic differential is a new technology used in electric vehicle technology and provides better balancing in curved paths. In addition, it is more lightweight than the mechanical differential and can be controlled by a single controller. In this study, intelligently supervised electronic differential design and control is carried out for electric vehicles. Embedded system is used to provide motor control with a fuzzy logic controller. High accuracy is obtained from experimental study. Keywords—Electronic differential; electric vehicle; embedded system; fuzzy logic controller; in-wheel motor",
"title": ""
},
{
"docid": "dcdb6242febbef358efe5a1461957291",
"text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "e6aef17f7efdf1cb497ef89262325162",
"text": "OBJECTIVES\nTo evaluate the erectile function (EF) domain of the International Index of Erectile Function (IIEF) as a diagnostic tool to discriminate between men with and without erectile dysfunction (ED) and to develop a clinically meaningful gradient of severity for ED.\n\n\nMETHODS\nOne thousand one hundred fifty-one men (1035 with and 116 without ED) who reported attempting sexual activity were evaluated using data from four clinical trials of sildenafil citrate (Viagra) and two control samples. The statistical program Classification and Regression Trees was used to determine optimal cutoff scores on the EF domain (range 6 to 30) to distinguish between men with and without ED and to determine levels of ED severity on the EF domain using the IIEF item on sexual intercourse satisfaction.\n\n\nRESULTS\nFor a 0.5 prevalence rate of ED, the optimal cutoff score was 25, with men scoring less than or equal to 25 classified as having ED and those scoring above 25 as not having ED (sensitivity 0.97, specificity 0.88). Sensitivity analyses revealed a robust statistical solution that was well supported with different assumed prevalence rates and several cross-validations. The severity of ED was classified into five categories: no ED (EF score 26 to 30), mild (EF score 22 to 25), mild to moderate (EF score 17 to 21), moderate (EF score 11 to 16), and severe (EF score 6 to 10). Substantial agreement was shown between these predicted and \"true\" classes (weighted kappa 0.80).\n\n\nCONCLUSIONS\nThe EF domain possesses favorable statistical properties as a diagnostic tool, not only in distinguishing between men with and without ED, but also in classifying levels of ED severity. Clinical validation with self-rated assessments of ED severity is warranted.",
"title": ""
},
{
"docid": "fc3c4f6c413719bbcf7d13add8c3d214",
"text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.",
"title": ""
},
{
"docid": "70745e8cdf957b1388ab38a485e98e60",
"text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.",
"title": ""
},
{
"docid": "09f2d2f6eb77d6c4174661addc95c942",
"text": "The purpose of the study was to examine the effects of two instructional approaches designed to improve the reading fluency of 2nd-grade children. The first approach was based on Stahl and Heubach's (2005) fluency-oriented reading instruction (FORI) and involved the scaffolded, repeated reading of grade-level texts over the course of each week. The second was a wide-reading approach that also involved scaffolded instruction. hut that incorporated the reading of 3 different grade-level texts each week and provicled significantly less opportunity for repetition. By the end of the school year. FORI and wide-reading approaches showed similar benefits for standardized measures of word reading efficiency and reading comprehension skills compared to control approachcs. although the benefits of the wide-reading approach emerged earlier and included oral text reading fluency skill. Thus, we conclude that fluency instruction that emphasizes extensive oral reading of grade-level text using scaffolded approaches is effective for promoting reading development in young learners.",
"title": ""
},
{
"docid": "6dc6943bfa8bc8b1a91bcad2e43e37b4",
"text": "The best ebooks about Model Driven Architecture In Practice A Software Production Environment Based On Conceptual Modeling that you can get for free here by download this Model Driven Architecture In Practice A Software Production Environment Based On Conceptual Modeling and save to your desktop. This ebooks is under topic such as model driven architecture in practice a software free download model-driven architecture in practice: a model-driven architecture in practice springer full model-driven practice: from requirements to code return of the antichristand the new world order ebook welfare reform and pensions bill1st sitting tuesday 2 dermatology skills for primary carean illustrated guide labour law and worker protection in developing countries patient communication for pharmacya case study approach on the tools of argumenthow the best lawyers think argue and systems and meaningconsulting in organisations systematic prayer and meditations by bahaulah ebook | thesustainablecorp informe técnico / technical report arxiv mdd4ms: a model driven development framework for modeling model driven architecture nanjing university",
"title": ""
},
{
"docid": "07e0aa2b1bb7457efc7ed17d00c7ecb4",
"text": "Barcode reading mobile applications that identify products from pictures taken using mobile devices are widely used by customers to perform online price comparisons or to access reviews written by others. Most of the currently available barcode reading approaches focus on decoding degraded barcodes and treat the underlying barcode detection task as a side problem that can be addressed using appropriate object detection methods. However, the majority of modern mobile devices do not meet the minimum working requirements of complex general purpose object detection algorithms and most of the efficient specifically designed barcode detection algorithms require user interaction to work properly. In this paper, we present a novel method for barcode detection in camera captured images based on a supervised machine learning algorithm that identifies one-dimensional barcodes in the two-dimensional Hough Transform space. Our model is angle invariant, requires no user interaction and can be executed on a modern mobile device. It achieves excellent results for two standard one-dimensional barcode datasets: WWU Muenster Barcode Database and ArTe-Lab 1D Medium Barcode Dataset. Moreover, we prove that it is possible to enhance the overall performance of a state-of-the-art barcode reading algorithm by combining it with our detection method.",
"title": ""
},
{
"docid": "6476066913e37c88e94cc83c15b05f43",
"text": "The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don’t balance the modal fusion and temporal fusion, or even haven’t temporal fusion; (2)The architecture of these models isn’t end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested in one time, alternatively easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments shows that am-LSTM is much better than traditional methods and other DNN models in three datasets: AVLetters, AVLetters2, AVDigits.",
"title": ""
}
] |
scidocsrr
|
e82fba42e24e64f001350fc5046f79fb
|
Anomaly Detection Using an Ensemble of Feature Models
|
[
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] |
[
{
"docid": "e3ac61e2a8fe211124446c22f7f88b69",
"text": "Requirement elicitation is a critical activity in the requirement development process and it explores the requirements of stakeholders. The common challenges that analysts face during elicitation process are to ensure effective communication between analyst and the users. Mostly errors in the systems are due to poor communication between user and analyst. This paper proposes an improved approach for requirements elicitation using paper prototype. The paper progresses through an assessment of the new approach using student projects developed for various organizations. A case study project is explained in the paper.",
"title": ""
},
{
"docid": "a4c17b823d325ed5f339f78cd4d1e9ab",
"text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V. It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver 0 dBm LO signal to a 50Ω load.",
"title": ""
},
{
"docid": "f5b3519d4ec0fd7f9cb67bf409bec5ac",
"text": "The AECOO industry is highly fragmented; therefore, efficient information sharing and exchange between various players are evidently needed. Furthermore, the information about facility components should be managed throughout the lifecycle and be easily accessible for all players in the AECOO industry. BIM is emerging as a method of creating, sharing, exchanging and managing the information throughout the lifecycle between all the stakeholders. RFID, on the other hand, has emerged as an automatic data collection and information storage technology, and has been used in different applications in AECOO. This research proposes permanently attaching RFID tags to facility components where the memory of the tags is populated with accumulated lifecycle information of the components taken from a standard BIM database. This information is used to enhance different processes throughout the lifecycle. A conceptual RFID-based system structure and data storage/retrieval design are elaborated. To explore the technical feasibility of the proposed approach, two case studies have been implemented and tested.",
"title": ""
},
{
"docid": "2ecd815af00b9961259fa9b2a9185483",
"text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.",
"title": ""
},
{
"docid": "c089e788b5cfda6c4a7f518af668bc3a",
"text": "The selection of hyper-parameters is critical in Deep Learning. Because of the long training time of complex models and the availability of compute resources in the cloud, “one-shot” optimization schemes – where the sets of hyper-parameters are selected in advance (e.g. on a grid or in a random manner) and the training is executed in parallel – are commonly used. [1] show that grid search is sub-optimal, especially when only a few critical parameters matter, and suggest to use random search instead. Yet, random search can be “unlucky” and produce sets of values that leave some part of the domain unexplored. Quasi-random methods, such as Low Discrepancy Sequences (LDS) avoid these issues. We show that such methods have theoretical properties that make them appealing for performing hyperparameter search, and demonstrate that, when applied to the selection of hyperparameters of complex Deep Learning models (such as state-of-the-art LSTM language models and image classification models), they yield suitable hyperparameters values with much fewer runs than random search. We propose a particularly simple LDS method which can be used as a drop-in replacement for grid/random search in any Deep Learning pipeline, both as a fully one-shot hyperparameter search or as an initializer in iterative batch optimization.",
"title": ""
},
{
"docid": "c9be0a4079800f173cf9553b9a69581c",
"text": "A 500W classical three-way Doherty power amplifier (DPA) with LDMOS devices at 1.8GHz is presented. Optimized device ratio is selected to achieve maximum efficiency as well as linearity. With a simple passive input driving network implementation, the demonstrator exhibits more than 55% efficiency with 9.9PAR WCDMA signal from 1805MHz-1880MHz. It can be linearized at -60dBc level with 20MHz LTE signal at an average output power of 49dBm.",
"title": ""
},
{
"docid": "cfea41d4bc6580c91ee27201360f8e17",
"text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in birthday times of cloud computing (2006) which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly and in the last years the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.",
"title": ""
},
{
"docid": "54ca6cb3e71574fc741c3181b8a4871c",
"text": "Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.",
"title": ""
},
{
"docid": "b8639c299c26ee93e3add104c4cd0e18",
"text": "This paper works with the concept of Divergent Component of Motion (DCM), also called `(instantaneous) Capture Point'. We present two real-time DCM trajectory generators for uneven (three-dimensional) ground surfaces, which lead to continuous leg (and corresponding ground reaction) force profiles and facilitate the use of toe-off motion during double support. Thus, the resulting DCM trajectories are well suited for real-world robots and allow for increased step length and step height. The performance of the proposed methods was tested in numerous simulations and experiments on IHMC's Atlas robot and DLR's humanoid robot TORO.",
"title": ""
},
{
"docid": "abde419c67119fa9d16f365262d39b34",
"text": "Silicon nitride is the most commonly used passivation layer in biosensor applications where electronic components must be interfaced with ionic solutions. Unfortunately, the predominant method for functionalizing silicon nitride surfaces, silane chemistry, suffers from a lack of reproducibility. As an alternative, we have developed a silane-free pathway that allows for the direct functionalization of silicon nitride through the creation of primary amines formed by exposure to a radio frequency glow discharge plasma fed with humidified air. The aminated surfaces can then be further functionalized by a variety of methods; here we demonstrate using glutaraldehyde as a bifunctional linker to attach a robust NeutrAvidin (NA) protein layer. Optimal amine formation, based on plasma exposure time, was determined by labeling treated surfaces with an amine-specific fluorinated probe and characterizing the coverage using X-ray photoelectron spectroscopy (XPS). XPS and radiolabeling studies also reveal that plasma-modified surfaces, as compared with silane-modified surfaces, result in similar NA surface coverage, but notably better reproducibility.",
"title": ""
},
{
"docid": "588129d869fefae4abb657a8396232e0",
"text": "A cold-adapted lipase producing bacterium, designated SS-33T, was isolated from sea sediment collected from the Bay of Bengal, India, and subjected to a polyphasic taxonomic study. Strain SS-33T exhibited the highest 16S rRNA gene sequence similarity with Staphylococcus cohnii subsp. urealyticus (97.18 %), Staphylococcus saprophyticus subsp. bovis (97.16 %) and Staphylococcus cohnii subsp. cohnii (97.04 %). Phylogenetic analysis based on the 16S rRNA gene sequences showed that strain SS-33T belongs to the genus Staphylococcus. Cells of strain SS-33T were Gram-positive, coccus-shaped, non-spore-forming, non-motile, catalase-positive and oxidase-negative. The major fatty acid detected in strain SS-33T was anteiso-C15:0 and the menaquinone was MK-7. The genomic DNA G + C content was 33 mol%. The DNA-DNA hybridization among strain SS-33T and the closely related species indicated that strain SS-33T represents a novel species of the genus Staphylococcus. On the basis of the morphological, physiological and chemotaxonomic characteristics, the results of phylogenetic analysis and the DNA-DNA hybridization, a novel species is proposed for strain SS-33T, with the name Staphylococcus lipolyticus sp. nov. The strain type is SS-33T (=MTCC 10101T = JCM 16560T). Staphylococcus lipolyticus SS-33T hydrolyzed various substrates including tributyrin, olive oil, Tween 20, Tween 40, Tween 60, and Tween 80 at low temperatures, as well as mesophilic temperatures. Lipase from strain SS-33T was partially purified by acetone precipitation. The molecular weight of lipase protein was determined 67 kDa by SDS-PAGE. Zymography was performed to monitor the lipase activity in Native-PAGE. Calcium ions increased lipase activity twofold. The optimum pH of lipase was pH 7.0 and optimum temperature was 30 °C. However, lipase exhibited 90 % activity of its optimum temperature at 10 °C and became more stable at 10 °C as compared to 30 °C. The lipase activity and stability at low temperature has wide ranging applications in various industrial processes. Therefore, cold-adapted mesophilic lipase from strain SS-33T may be used for industrial applications. This is the first report of the production of cold-adapted mesophilic lipase by any Staphylococcus species.",
"title": ""
},
{
"docid": "75b0a7b0fa0320a3666fb147471dd45f",
"text": "Maximum power densities by air-driven microbial fuel cells (MFCs) are considerably influenced by cathode performance. We show here that application of successive polytetrafluoroethylene (PTFE) layers (DLs), on a carbon/PTFE base layer, to the air-side of the cathode in a single chamber MFC significantly improved coulombic efficiencies (CEs), maximum power densities, and reduced water loss (through the cathode). Electrochemical tests using carbon cloth electrodes coated with different numbers of DLs indicated an optimum increase in the cathode potential of 117 mV with four-DLs, compared to a <10 mV increase due to the carbon base layer alone. In MFC tests, four-DLs was also found to be the optimum number of coatings, resulting in a 171% increase in the CE (from 19.1% to 32%), a 42% increase in the maximum power density (from 538 to 766 mW m ), and measurable water loss was prevented. The increase in CE due is believed to result from the increased power output and the increased operation time (due to a reduction in aerobic degradation of substrate sustained by oxygen diffusion through the cathode). 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b5997c5c88f57b387e56dc68445b38e2",
"text": "Identifying the relationship between two text objects is a core research problem underlying many natural language processing tasks. A wide range of deep learning schemes have been proposed for text matching, mainly focusing on sentence matching, question answering or query document matching. We point out that existing approaches do not perform well at matching long documents, which is critical, for example, to AI-based news article understanding and event or story formation. The reason is that these methods either omit or fail to fully utilize complicated semantic structures in long documents. In this paper, we propose a graph approach to text matching, especially targeting long document matching, such as identifying whether two news articles report the same event in the real world, possibly with different narratives. We propose the Concept Interaction Graph to yield a graph representation for a document, with vertices representing different concepts, each being one or a group of coherent keywords in the document, and with edges representing the interactions between different concepts, connected by sentences in the document. Based on the graph representation of document pairs, we further propose a Siamese Encoded Graph Convolutional Network that learns vertex representations through a Siamese neural network and aggregates the vertex features though Graph Convolutional Networks to generate the matching result. Extensive evaluation of the proposed approach based on two labeled news article datasets created at Tencent for its intelligent news products show that the proposed graph approach to long document matching significantly outperforms a wide range of state-of-the-art methods.",
"title": ""
},
{
"docid": "c6f173f75917ee0632a934103ca7566c",
"text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.",
"title": ""
},
{
"docid": "550e19033cb00938aed89eb3cce50a76",
"text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is place in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 array of square parasitic patches are fed by MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. The VSWR is <; 2 is obtained over 5.725-6.4 GHz, which covers 5.725-5.875 GHz ISM WLAN frequency band and 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -2ο dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results satisfy with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication system.",
"title": ""
},
{
"docid": "b1151d3588dc4abff883bef8c60005d1",
"text": "Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.",
"title": ""
},
{
"docid": "63f911d28fe0b844856a4d98c3dbf79f",
"text": "The idea that music makes you smarter has received considerable attention from scholars and the media. The present report is the first to test this hypothesis directly with random assignment of a large sample of children (N = 144) to two different types of music lessons (keyboard or voice) or to control groups that received drama lessons or no lessons. IQ was measured before and after the lessons. Compared with children in the control groups, children in the music groups exhibited greater increases in full-scale IQ. The effect was relatively small, but it generalized across IQ subtests, index scores, and a standardized measure of academic achievement. Unexpectedly, children in the drama group exhibited substantial pre- to post-test improvements in adaptive social behavior that were not evident in the music groups.",
"title": ""
},
{
"docid": "83c184c457e35e80ce7ff8012b5dcd06",
"text": "The goal of this paper is to enable a 3D “virtual-tour” of an apartment given a small set of monocular images of different rooms, as well as a 2D floor plan. We frame the problem as inference in a Markov Random Field which reasons about the layout of each room and its relative pose (3D rotation and translation) within the full apartment. This gives us accurate camera pose in the apartment for each image. What sets us apart from past work in layout estimation is the use of floor plans as a source of prior knowledge, as well as localization of each image within a bigger space (apartment). In particular, we exploit the floor plan to impose aspect ratio constraints across the layouts of different rooms, as well as to extract semantic information, e.g., the location of windows which are marked in floor plans. We show that this information can significantly help in resolving the challenging room-apartment alignment problem. We also derive an efficient exact inference algorithm which takes only a few ms per apartment. This is due to the fact that we exploit integral geometry as well as our new bounds on the aspect ratio of rooms which allow us to carve the space, significantly reducing the number of physically possible configurations. We demonstrate the effectiveness of our approach on a new dataset which contains over 200 apartments.",
"title": ""
},
{
"docid": "cd11e079db25441a1a5801c71fcff781",
"text": "Quad-robot type (QRT) unmanned aerial vehicles (UAVs) have been developed for quick detection and observation of the circumstances under calamity environment such as indoor fire spots. The UAV is equipped with four propellers driven by each electric motor, an embedded controller, an Inertial Navigation System (INS) using three rate gyros and accelerometers, a CCD (Charge Coupled Device) camera with wireless communication transmitter for observation, and an ultrasonic range sensor for height control. Accurate modeling and robust flight control of QRT UAVs are mainly discussed in this work. Rigorous dynamic model of a QRT UAV is obtained both in the reference and body frame coordinate systems. A disturbance observer (DOB) based controller using the derived dynamic models is also proposed for robust hovering control. The control input induced by DOB is helpful to use simple equations of motion satisfying accurately derived dynamics. The developed hovering robot shows stable flying performances under the adoption of DOB and the vision based localization method. Although a model is incorrect, DOB method can design a controller by regarding the inaccurate part of the model J. Kim Department of Mechanical Engineering, Seoul National University of Technology, Seoul, South Korea e-mail: jinhyun@snut.ac.kr M.-S. Kang Department of Mechatronics Engineering, Hanyang University, Ansan, South Korea e-mail: wowmecha@gmail.com S. Park (B) Division of Applied Robot Technology, Korea Institute of Industrial Technology, Ansan, South Korea e-mail: sdpark@kitech.re.kr 10 J Intell Robot Syst (2010) 57:9–26 and sensor noises as disturbances. The UAV can also avoid obstacles using eight IR (Infrared) and four ultrasonic range sensors. This kind of micro UAV can be widely used in various calamity observation fields without danger of human beings under harmful environment. The experimental results show the performance of the proposed control algorithm.",
"title": ""
}
] |
scidocsrr
|
f879ae22c409e9a62a6576f7912b257b
|
Software debugging, testing, and verification
|
[
{
"docid": "d733f07d3b022ad8a7020c05292bcddd",
"text": "In Chapter 9 we discussed quality management models with examples of in-process metrics and reports. The models cover both the front-end design and coding activities and the back-end testing phases of development. The focus of the in-process data and reports, however, are geared toward the design review and code inspection data, although testing data is included. This chapter provides a more detailed discussion of the in-process metrics from the testing perspective. 1 These metrics have been used in the IBM Rochester software development laboratory for some years with continual evolution and improvement, so there is ample implementation experience with them. This is important because although there are numerous metrics for software testing, and new ones being proposed frequently, relatively few are supported by sufficient experiences of industry implementation to demonstrate their usefulness. For each metric, we discuss its purpose, data, interpretation , and use, and provide a graphic example based on real-life data. Then we discuss in-process quality management vis-à-vis these metrics and revisit the metrics 271 1. This chapter is a modified version of a white paper written for the IBM corporate-wide Software Test Community Leaders (STCL) group, which was published as \" In-process Metrics for Software Testing, \" in",
"title": ""
}
] |
[
{
"docid": "143da39941ecc8fb69e87d611503b9c0",
"text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes",
"title": ""
},
{
"docid": "fa1440ce586681326b18807e41e5465a",
"text": "Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target’s ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.",
"title": ""
},
{
"docid": "dc8d9a7da61aab907ee9def56dfbd795",
"text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
},
{
"docid": "982d7d2d65cddba4fa7dac3c2c920790",
"text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.",
"title": ""
},
{
"docid": "509fa5630ed7e3e7bd914fb474da5071",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
},
{
"docid": "1cfa5ee5d737e42487e6aa1bdf2cafc9",
"text": "This article presents a new platform called PCIV (intelligent platform for vehicular control) for traffic monitoring, based on radio frequency Identification (RFID) and cloud computing, applied to road traffic monitoring in public transportation systems. This paper shows the design approach and the experimental validation of the platform in two real scenarios: a university campus and a small city. Experiments demonstrated RFID technology is viable to be implemented to monitor traffic in smart cities.",
"title": ""
},
{
"docid": "dcdb6242febbef358efe5a1461957291",
"text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.",
"title": ""
},
{
"docid": "bd039cbb3b9640e917b9cc15e45e5536",
"text": "We introduce adversarial neural networks for representation learning as a novel approach to transfer learning in brain-computer interfaces (BCIs). The proposed approach aims to learn subject-invariant representations by simultaneously training a conditional variational autoencoder (cVAE) and an adversarial network. We use shallow convolutional architectures to realize the cVAE, and the learned encoder is transferred to extract subject-invariant features from unseen BCI users’ data for decoding. We demonstrate a proof-of-concept of our approach based on analyses of electroencephalographic (EEG) data recorded during a motor imagery BCI experiment.",
"title": ""
},
{
"docid": "32334cf8520dde6743aa66b4e35742ff",
"text": "LinKBase® is a biomedical ontology. Its hierarchical structure, coverage, use of operational, formal and linguistic relationships, combined with its underlying language technology, make it an excellent ontology to support Natural Language Processing and Understanding (NLP/NLU) and data integration applications. In this paper we will describe the structure and coverage of LinKBase®. In addition, we will discuss the editing of LinKBase® and how domain experts are guided by specific editing rules to ensure modeling quality and consistency. Finally, we compare the structure of LinKBase® to the structure of third party terminologies and ontologies and discuss the integration of these data sources into",
"title": ""
},
{
"docid": "764e5c5201217be1aa9e24ce4fa3760a",
"text": "Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Please do not copy or distribute without explicit permission of the authors. Abstract Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aims at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends on not only a customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, as well as the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.",
"title": ""
},
{
"docid": "0fdd7f5c5cd1225567e89b456ef25ea0",
"text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.",
"title": ""
},
{
"docid": "314fba798c73569f6c8fa266821bac8e",
"text": "Core to integrated navigation systems is the concept of fusing noisy observations from GPS, Inertial Measurement Units (IMU), and other available sensors. The current industry standard and most widely used algorithm for this purpose is the extended Kalman filter (EKF) [6]. The EKF combines the sensor measurements with predictions coming from a model of vehicle motion (either dynamic or kinematic), in order to generate an estimate of the current navigational state (position, velocity, and attitude). This paper points out the inherent shortcomings in using the EKF and presents, as an alternative, a family of improved derivativeless nonlinear Kalman filters called sigma-point Kalman filters (SPKF). We demonstrate the improved state estimation performance of the SPKF by applying it to the problem of loosely coupled GPS/INS integration. A novel method to account for latency in the GPS updates is also developed for the SPKF (such latency compensation is typically inaccurate or not practical with the EKF). A UAV (rotor-craft) test platform is used to demonstrate the results. Performance metrics indicate an approximate 30% error reduction in both attitude and position estimates relative to the baseline EKF implementation.",
"title": ""
},
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
{
"docid": "97adb3a003347f579706cd01a762bdc9",
"text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.",
"title": ""
},
{
"docid": "18c230517b8825b616907548829e341b",
"text": "The application of small Remotely-Controlled (R/C) aircraft for aerial photography presents many unique advantages over manned aircraft due to their lower acquisition cost, lower maintenance issue, and superior flexibility. The extraction of reliable information from these images could benefit DOT engineers in a variety of research topics including, but not limited to work zone management, traffic congestion, safety, and environmental. During this effort, one of the West Virginia University (WVU) R/C aircraft, named ‘Foamy’, has been instrumented for a proof-of-concept demonstration of aerial data acquisition. Specifically, the aircraft has been outfitted with a GPS receiver, a flight data recorder, a downlink telemetry hardware, a digital still camera, and a shutter-triggering device. During the flight a ground pilot uses one of the R/C channels to remotely trigger the camera. Several hundred high-resolution geo-tagged aerial photographs were collected during 10 flight experiments at two different flight fields. A Matlab based geo-reference software was developed for measuring distances from an aerial image and estimating the geo-location of each ground asset of interest. A comprehensive study of potential Sources of Errors (SOE) has also been performed with the goal of identifying and addressing various factors that might affect the position estimation accuracy. The result of the SOE study concludes that a significant amount of position estimation error was introduced by either mismatching of different measurements or by the quality of the measurements themselves. The first issue is partially addressed through the design of a customized Time-Synchronization Board (TSB) based on a MOD 5213 embedded microprocessor. The TSB actively controls the timing of the image acquisition process, ensuring an accurate matching of the GPS measurement and the image acquisition time. The second issue is solved through the development of a novel GPS/INS (Inertial Navigation System) based on a 9-state Extended Kalman Filter (EKF). The developed sensor fusion algorithm provides a good estimation of aircraft attitude angle without the need for using expensive sensors. Through the help of INS integration, it also provides a very smooth position estimation that eliminates large jumps typically seen in the raw GPS measurements.",
"title": ""
},
{
"docid": "84436fc1467a259e0e584da3af6f5ef7",
"text": "BACKGROUND\nMicroRNAs are short regulatory RNAs that negatively modulate protein expression at a post-transcriptional and/or translational level and are deeply involved in the pathogenesis of several types of cancers. Specifically, microRNA-221 (miR-221) is overexpressed in many human cancers, wherein accumulating evidence indicates that it functions as an oncogene. However, the function of miR-221 in human osteosarcoma has not been totally elucidated. In the present study, the effects of miR-221 on osteosarcoma and the possible mechanism by which miR-221 affected the survival, apoptosis, and cisplatin resistance of osteosarcoma were investigated.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nReal-time quantitative PCR analysis revealed miR-221 was significantly upregulated in osteosarcoma cell lines than in osteoblasts. Both human osteosarcoma cell lines SOSP-9607 and MG63 were transfected with miR-221 mimic or inhibitor to regulate miR-221 expression. The effects of miR-221 were then assessed by cell viability, cell cycle analysis, apoptosis assay, and cisplatin resistance assay. In both cells, upregulation of miR-221 induced cell survival and cisplatin resistance and reduced cell apoptosis. In addition, knockdown of miR-221 inhibited cell growth and cisplatin resistance and induced cell apoptosis. Potential target genes of miR-221 were predicted using bioinformatics. Moreover, luciferase reporter assay and western blot confirmed that PTEN was a direct target of miR-221. Furthermore, introduction of PTEN cDNA lacking 3'-UTR or PI3K inhibitor LY294002 abrogated miR-221-induced cisplatin resistance. Finally, both miR-221 and PTEN expression levels in osteosarcoma samples were examined by using real-time quantitative PCR and immunohistochemistry. High miR-221 expression level and inverse correlation between miR-221 and PTEN levels were revealed in osteosarcoma tissues.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results for the first time demonstrate that upregulation of miR-221 induces the malignant phenotype of human osteosarcoma whereas knockdown of miR-221 reverses this phenotype, suggesting that miR-221 could be a potential target for osteosarcoma treatment.",
"title": ""
}
] |
scidocsrr
|
92426bec8ae60f467b74ffaf3ac38b32
|
WYSIWYM - Integrated visualization, exploration and authoring of semantically enriched un-structured content
|
[
{
"docid": "9d330ac4c902c80b19b5f578e3bd9125",
"text": "Since its introduction in 1986, the 10-item System Usability Scale (SUS) has been assumed to be unidimensional. Factor analysis of two independent SUS data sets reveals that the SUS actually has two factors – Usability (8 items) and Learnability (2 items). These new scales have reasonable reliability (coefficient alpha of .91 and .70, respectively). They correlate highly with the overall SUS (r = .985 and .784, respectively) and correlate significantly with one another (r = .664), but at a low enough level to use as separate scales. A sensitivity analysis using data from 19 tests had a significant Test by Scale interaction, providing additional evidence of the differential utility of the new scales. Practitioners can continue to use the current SUS as is, but, at no extra cost, can also take advantage of these new scales to extract additional information from their SUS data.",
"title": ""
},
{
"docid": "10496d5427035670d89f72a64b68047f",
"text": "A challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)Collect: learn from provious works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages, (3)Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.",
"title": ""
}
] |
[
{
"docid": "bb238fa1ac5d33233af08f698a1eeb5f",
"text": "This paper presents the results of six tests on R/C bridge cantilever slabs without shear reinforcement subjected to concentrated loading. The specimens represent actual deck slabs of box-girder bridges scaled 3/4. They were 10 m long with a clear cantilever equal to 2.78 m and with variable thickness (190 mm at the tip of the cantilever and 380 mm at the clamped edge). Reinforcement ratios for the specimens were equal to 0.78% and 0.60%. All tests failed in a brittle manner by development of a shear failure surface around the concentrated loads. The experimental results are investigated on the basis of linear elastic shear fields for the various tests. Taking advantage of the experimental and numerical results, practical recommendations for estimating the shear strength of R/C bridge cantilever slabs are proposed. © 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "88a4ab49e7d3263d5d6470d123b6e74b",
"text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.",
"title": ""
},
{
"docid": "46fdba2028abec621e8b9fbd0919e043",
"text": "The HF band, located in between 3-30 MHz, can offer single hop communication channels over a very long distances - even up to around the world. Traditionally, the HF is seen primarily as a solution for long communication ranges although it may also be a perfect choice for much shorter communication ranges when high data rates are not a primary target. It is well known that the HF channel is a demanding environment to operate since it changes rapidly, i.e., channel is available at a moment but the next moment it is not. Therefore, a big problem in HF communications is channel access or channel selection. By choosing the used HF channels wisely, i.e., cognitively, the channel behavior and system reliability considerably improves. This paper discusses about a change of paradigm in HF communication that will take place after applying cognitive principles on the HF system.",
"title": ""
},
{
"docid": "242089e8f694a83cb4432862f2d6b1fc",
"text": "We present an interpretable framework for path prediction that leverages dependencies between agents’ behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents’ trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents’ behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, Stanford Drone Dataset (SDD). Finally, we show CAR-Net’s ability to generalize to unseen scenes.",
"title": ""
},
{
"docid": "ef2996a04c819777cc4b88c47f502c21",
"text": "Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research being done on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper overviews the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioink used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies along with appropriate and recommended bioinks are discussed. In addition, the current state of the art in bioprinter technology is reviewed with a focus on the commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are also presented. [DOI: 10.1115/1.4028512]",
"title": ""
},
{
"docid": "1c5e94746c577008ba56cc559bd6bbbf",
"text": "Online reviews, a form of online word-of-mouth (eWOM), have recently become one of the most important sources of information for modern consumers. Recent scholarship involving eWOM often focuses on the transmission and impact of online reviews but sheds less light on the underlying processes that drive consumers’ receptions of them. Similarly, few studies have explored the recipients’ perspectives in the context of various services. This study addresses the aforementioned gaps in extant literature. The research model in this study is built upon the rich stream of literature related to how people are influenced by information and is tested on reviews collected from Yelp.com, a popular online advisory website dedicated to services businesses throughout the United States. The results of the study show that a combination of both reviewer and review characteristics are significantly correlated with the perceived usefulness of reviews. The study also finds several results that are anomalous to established knowledge related to consumers’ information consumption, both offline and online. The authors present the results of the study and discuss their significance for research and practice. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8",
"text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.",
"title": ""
},
{
"docid": "47897fc364551338fcaee76d71568e2e",
"text": "As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.",
"title": ""
},
{
"docid": "6a2a7b5831f6b3608eb88f5ccda6d520",
"text": "In this paper we examine currently used programming contest systems. We discuss possible reasons why we do not expect any of the currently existing contest systems to be adopted by a major group of different programming contests. We suggest to approach the design of a contest system as a design of a secure IT system, using known methods from the area of computer",
"title": ""
},
{
"docid": "b1b511c0e014861dac12c2254f6f1790",
"text": "This paper describes automatic speech recognition (ASR) systems developed jointly by RWTH, UPB and FORTH for the 1ch, 2ch and 6ch track of the 4th CHiME Challenge. In the 2ch and 6ch tracks the final system output is obtained by a Confusion Network Combination (CNC) of multiple systems. The Acoustic Model (AM) is a deep neural network based on Bidirectional Long Short-Term Memory (BLSTM) units. The systems differ by front ends and training sets used for the acoustic training. The model for the 1ch track is trained without any preprocessing. For each front end we trained and evaluated individual acoustic models. We compare the ASR performance of different beamforming approaches: a conventional superdirective beamformer [1] and an MVDR beamformer as in [2], where the steering vector is estimated based on [3]. Furthermore we evaluated a BLSTM supported Generalized Eigenvalue beamformer using NN-GEV [4]. The back end is implemented using RWTH’s open-source toolkits RASR [5], RETURNN [6] and rwthlm [7]. We rescore lattices with a Long Short-Term Memory (LSTM) based language model. The overall best results are obtained by a system combination that includes the lattices from the system of UPB’s submission [8]. Our final submission scored second in each of the three tracks of the 4th CHiME Challenge.",
"title": ""
},
{
"docid": "52bee48854d8eaca3b119eb71d79c22d",
"text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge amount of features. A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.",
"title": ""
},
{
"docid": "45477e67e1ddc589fde6d989254e4c32",
"text": "Existing process mining approaches are able to tolerate a certain degree of noise in process log. However, processes that contain infrequent paths, multiple (nested) parallel branches, or have been changed in an ad-hoc manner, still pose challenges. For such cases, process mining typically returns “spaghetti-models”, that are hardly usable even as a starting point for process (re-)design. In this paper, we address these challenges by introducing data transformation and pre-processing steps that improve and ensure the quality of mined models for existing process mining approaches. We propose the concept of semantic log purging, i.e., the cleaning of logs based on domain specific constraints utilizing knowledge that typically complements processes. Furthermore we demonstrate the feasibility and effectiveness of the approach based on a case study in the higher education domain. We think that semantic log purging will enable process mining to yield better results, thus giving process (re-)designers a valuable tool.",
"title": ""
},
{
"docid": "eed8fd39830e8058d55427623bb655df",
"text": "In this paper, we present a solution for main content identification in web pages. Our solution is language-independent; Web pages may be written in different languages. It is topic-independent; no domain knowledge or dictionary is applied. And it is unsupervised; no training phase is necessary. The solution exploits the tree structure of web pages and the frequencies of text tokens to attribute scores of content density to the areas of the page and by the way identify the most important one. We tested this solution over representative examples of web pages to show how efficient and accurate it is. The results were satisfying.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "dc736509fbed0afcebc967ca31ffc4d5",
"text": "and William K. Wootters IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598 Norman Bridge Laboratory of Physics 12-33, California Institute of Technology, Pasadena, California 91125 Département d’Informatique et de Recherche Ope ́rationelle, Succursale Centre-Ville, Montre ́al, Canada H3C 3J7 AT&T Shannon Laboratory, 180 Park Avenue, Building 103, Florham Park, New Jersey 07932 Physics Department, Williams College, Williamstown, Massachusetts 01267 ~Received 17 June 1998 !",
"title": ""
},
{
"docid": "4aaab0aa476c60b2486fc76d63f7d899",
"text": "When evaluating a potential product purchase, customers may have many questions in mind. They want to get adequate information to determine whether the product of interest is worth their money. In this paper we present a simple deep learning model for answering questions regarding product facts and specifications. Given a question and a product specification, the model outputs a score indicating their relevance. To train and evaluate our proposed model, we collected a dataset of 7,119 questions that are related to 153 different products. Experimental results demonstrate that — despite its simplicity — the performance of our model is shown to be comparable to a more complex state-of-the-art baseline.",
"title": ""
},
{
"docid": "ffe218d01142769cf794c1b1a4e7969f",
"text": "Most neurons in the mammalian CNS encode and transmit information via action potentials. Knowledge of where these electrical events are initiated and how they propagate within neurons is therefore fundamental to an understanding of neuronal function. While work from the 1950s suggested that action potentials are initiated in the axon, many subsequent investigations have suggested that action potentials can also be initiated in the dendrites. Recently, experiments using simultaneous patch-pipette recordings from different locations on the same neuron have been used to address this issue directly. These studies show that the site of action potential initiation is in the axon, even when synaptic activation is powerful enough to elicit dendritic electrogenesis. Furthermore, these and other studies also show that following initiation, action potentials actively backpropagate into the dendrites of many neuronal types, providing a retrograde signal of neuronal output to the dendritic tree.",
"title": ""
},
{
"docid": "4354df503e85911040e2f438024f16f3",
"text": "This paper proposes a Hybrid Approximate Representation (HAR) based on unifying several efficient approximations of the generalized reprojection error (which is known as the <italic>gold standard</italic> for multiview geometry). The HAR is an over-parameterization scheme where the approximation is applied simultaneously in multiple parameter spaces. A joint minimization scheme “HAR-Descent” can then solve the PnP problem efficiently, while remaining robust to approximation errors and local minima. The technique is evaluated extensively, including numerous synthetic benchmark protocols and the real-world data evaluations used in previous works. The proposed technique was found to have runtime complexity comparable to the fastest <inline-formula><tex-math notation=\"LaTeX\">$O(n)$</tex-math><alternatives><inline-graphic xlink:href=\"hadfield-ieq1-2806446.gif\"/></alternatives></inline-formula> techniques, and up to 10 times faster than current state of the art minimization approaches. In addition, the accuracy exceeds that of all 9 previous techniques tested, providing definitive state of the art performance on the benchmarks, across all 90 of the experiments in the paper and supplementary material, which can be found on the Computer Society Digital Library at <uri>http://doi.ieeecomputersociety.org/10.1109/TPAMI.2018.2806446</uri>.",
"title": ""
},
{
"docid": "fff6c1ca2fde7f50c3654f1953eb97e6",
"text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.",
"title": ""
},
{
"docid": "93388c2897ec6ec7141bcc820ab6734c",
"text": "We address the task of single depth image inpainting. Without the corresponding color images, previous or next frames, depth image inpainting is quite challenging. One natural solution is to regard the image as a matrix and adopt the low rank regularization just as color image inpainting. However, the low rank assumption does not make full use of the properties of depth images. A shallow observation inspires us to penalize the nonzero gradients by sparse gradient regularization. However, statistics show that though most pixels have zero gradients, there is still a non-ignorable part of pixels, whose gradients are small but nonzero. Based on this property of depth images, we propose a low gradient regularization method in which we reduce the penalty for small gradients while penalizing the nonzero gradients to allow for gradual depth changes. The proposed low gradient regularization is integrated with the low rank regularization into the low rank low gradient approach for depth image inpainting. We compare our proposed low gradient regularization with the sparse gradient regularization. The experimental results show the effectiveness of our proposed approach.",
"title": ""
}
] |
scidocsrr
|
a8ab75a5ab20fe1c2fccf0fe04c3dc29
|
Design and Kinematic Modeling of Constant Curvature Continuum Robots: A Review
|
[
{
"docid": "f11dbf9c32b126de695801957171465c",
"text": "Continuum robots, which are composed of multiple concentric, precurved elastic tubes, can provide dexterity at diameters equivalent to standard surgical needles. Recent mechanics-based models of these “active cannulas” are able to accurately describe the curve of the robot in free space, given the preformed tube curves and the linear and angular positions of the tube bases. However, in practical applications, where the active cannula must interact with its environment or apply controlled forces, a model that accounts for deformation under external loading is required. In this paper, we apply geometrically exact rod theory to produce a forward kinematic model that accurately describes large deflections due to a general collection of externally applied point and/or distributed wrench loads. This model accommodates arbitrarily many tubes, with each having a general preshaped curve. It also describes the independent torsional deformation of the individual tubes. Experimental results are provided for both point and distributed loads. Average tip error under load was 2.91 mm (1.5% - 3% of total robot length), which is similar to the accuracy of existing free-space models.",
"title": ""
}
] |
[
{
"docid": "e5020601a6e4b2c07868ffc0f84498ae",
"text": "We describe a combined nonlinear acoustic echo cancellation and residual echo suppression system. The echo canceler uses parallel Hammerstein branches consisting of fixed nonlinear basis functions and linear adaptive filters. The residual echo suppressor uses an Artificial Neural Network for modeling of the residual echo spectrum from spectral features computed from the far-end signal. We show that modeling nonlinear effects both in the echo canceler and in the echo suppressor leads to an increased performance of the combined system.",
"title": ""
},
{
"docid": "cd4d874d0428a61c27bdcadc752c7d68",
"text": "Recent advances in genome technologies and the ensuing outpouring of genomic information related to cancer have accelerated the convergence of discovery science and clinical medicine. Successful examples of translating cancer genomics into therapeutics and diagnostics reinforce its potential to make possible personalized cancer medicine. However, the bottlenecks along the path of converting a genome discovery into a tangible clinical endpoint are numerous and formidable. In this Perspective, we emphasize the importance of establishing the biological relevance of a cancer genomic discovery in realizing its clinical potential and discuss some of the major obstacles to moving from the bench to the bedside.",
"title": ""
},
{
"docid": "1186bb5c96eebc26ce781d45fae7768d",
"text": "Essential genes are required for the viability of an organism. Accurate and rapid identification of new essential genes is of substantial theoretical interest to synthetic biology and has practical applications in biomedicine. Fractals provide facilitated access to genetic structure analysis on a different scale. In this study, machine learning-based methods using solely fractal features are presented and the problem of predicting essential genes in bacterial genomes is evaluated. Six fractal features were investigated to learn the parameters of five supervised classification methods for the binary classification task. The optimal parameters of these classifiers are determined via grid-based searching technique. All the currently available identified genes from the database of essential genes were utilized to build the classifiers. The fractal features were proven to be more robust and powerful in the prediction performance. In a statistical sense, the ELM method shows superiority in predicting the essential genes. Non-parameter tests of the average AUC and ACC showed that the fractal feature is much better than other five compared features sets. Our approach is promising and convenient to identify new bacterial essential genes.",
"title": ""
},
{
"docid": "557451621286ecd4fbf21909ff88450f",
"text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. The objective of the study was to find out the results of topical wound dressings in diabetic wounds with natural honey.\n\n\nMETHODS\nThe study was conducted at department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender with any age group having diabetic foot Wagner type I, II, III and II. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%), 2 cases (16.67%) were presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of big toe in 3 patients (25%), second and third toe ray in 2 patients (16.67%) and of fourth and fifth toes at the level of metatarsophalengeal joints were done in 3 patients (25%). One patient (8.33%) had below knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.",
"title": ""
},
{
"docid": "831b153045d9afc8f92336b3ba8019c6",
"text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.",
"title": ""
},
{
"docid": "f7c62753c37d83d089c5b1e910140ac4",
"text": "It is often desirable to determine if an image has been modified in any way from its original recording. The JPEG format affords engineers many implementation trade-offs which give rise to widely varying JPEG headers. We exploit these variations for image authentication. A camera signature is extracted from a JPEG image consisting of information about quantization tables, Huffman codes, thumbnails, and exchangeable image file format (EXIF). We show that this signature is highly distinct across 1.3 million images spanning 773 different cameras and cell phones. Specifically, 62% of images have a signature that is unique to a single camera, 80% of images have a signature that is shared by three or fewer cameras, and 99% of images have a signature that is unique to a single manufacturer. The signature of Adobe Photoshop is also shown to be unique relative to all 773 cameras. These signatures are simple to extract and offer an efficient method to establish the authenticity of a digital image.",
"title": ""
},
{
"docid": "494ed6efac81a9e8bbdbfa9f19a518d3",
"text": "We studied the possibilities of embroidered antenna-IC interconnections and contour antennas in passive ultrahigh-frequency radio-frequency identification textile tags. The tag antennas were patterned from metal-coated fabrics and embroidered with conductive yarn. The wireless performance of the tags with embroidered antenna-IC interconnections was evaluated through measurements, and the results were compared to identical tags, where the ICs were attached using regular conductive epoxy. Our results show that the textile tags with embroidered antenna-IC interconnections attained similar performance. In addition, the tags where only the borderlines of the antennas were embroidered showed excellent wireless performance.",
"title": ""
},
{
"docid": "60a3ba5263067030434db976e6e121db",
"text": "Background and Objective: Physical inactivity is the fourth leading risk factor for global mortality. Physical inactivity levels are rising in developing countries and Malaysia is of no exception. Malaysian Adult Nutrition Survey 2003 reported that the prevalence of physical inactivity was 39.7% and the prevalence was higher for women (42.6%) than men (36.7%). In Malaysia, the National Health and Morbidity Survey 2006 reported that 43.7% (5.5 million) of Malaysian adults were physically inactive. These statistics show that physically inactive is an important public health concern in Malaysia. College students have been found to have poor physical activity habits. The objective of this study was to identify the physical activity level among students of Asia Metropolitan University (AMU) in Malaysia.",
"title": ""
},
{
"docid": "d27ed8fd2acd0dad6436b7e98853239d",
"text": "a r t i c l e i n f o What are the psychological mechanisms that trigger habits in daily life? Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts—but not goals—automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences, Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas others indicate that habits are activated directly by context cues, with minimal influence of goals In the present experiments, we first test the cognitive associations …",
"title": ""
},
{
"docid": "a691ec038ef76874afe0a2b67ff75d3e",
"text": "Uveitis is a general term for intraocular inflammation and includes a large number of clinical phenotypes. As a group of disorders, it is responsible for 10% of all registered blind patients under the age of 65 years. Immune-mediated uveitis may be associated with a systemic disease or may be localized to the eye. The pro-inflammatory cytokines interleukin (IL)-1beta, IL-2, IL-6, interferon-gamma and tumor necrosis factor-alpha have all been detected within the ocular fluids or tissues in the inflamed eye together with others, such as IL-4, IL-5, IL-10 and transforming growth factor-beta. The chemokines IL-8, monocyte chemoattractant protein-1, macrophage inflammatory protein (MIP)-1alpha, MIP-1beta and fractalkine are also thought to be involved in the associated inflammatory response. There have been a number of studies in recent years investigating cytokine profiles in different forms of uveitis with a view to determining what cytokines are important in the inflamed eye. This review attempts to present the current state of knowledge from in vitro and in vivo research on the inflammatory cytokines in intraocular inflammatory diseases.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
{
"docid": "18b7dadfec8b02624b6adeb2a65d7223",
"text": "This paper provides a brief introduction to recent work in st atistical parsing and its applications. We highlight succes ses to date, remaining challenges, and promising future work.",
"title": ""
},
{
"docid": "6fbce446ceb871bc1d832ce8d06398af",
"text": "The 250 kW TRIGA Mark II research reactor, Vienna, operates since 7 March 1962. The initial criticality was achieved with the first core loading of 57 fuel elements (FE) of same type (Aluminium clad fuel with 20% enrichment). Later on due to fuel consumption SST clad 20% enriched FE (s) have been added to compensate the reactor core burn-up. In 1975 high enriched (HEU) TRIGA fuel (FLIP fuel = Fuel Lifetime Improvement Program) was introduced into the core. The addition of this FLIP fuel resulted in the current completely mixed core. Therefore the current core of the TRIGA reactor Vienna is operating with a completely mixed core using three different types of fuels with two categories of enrichments. This makes the reactor physics calculations very complicated. To calculate the current core, a Monte Carlo based radiation transport computer code MCNP5 was employed to develop the current core of the TRIGA reactor. The present work presents the MCNP model of the current core and its validation through two experiments performed on the reactor. The experimental results of criticality and reactivity distribution experiments confirm the current core model. As the basis of this paper is based on the long-term cooperation with our colleague Dr. Matjaz Ravnik we therefore devote this paper in his memory.",
"title": ""
},
{
"docid": "cf42ab9460b2665b6537d6172b4ef3fb",
"text": "Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, sensing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.",
"title": ""
},
{
"docid": "22bcd1d04c92bc6c108638df91997e9b",
"text": "State of the art automatic optimization of OpenCL applications focuses on improving the performance of individual compute kernels. Programmers address opportunities for inter-kernel optimization in specific applications by ad-hoc hand tuning: manually fusing kernels together. However, the complexity of interactions between host and kernel code makes this approach weak or even unviable for applications involving more than a small number of kernel invocations or a highly dynamic control flow, leaving substantial potential opportunities unexplored. It also leads to an over complex, hard to maintain code base. We present Helium, a transparent OpenCL overlay which discovers, manipulates and exploits opportunities for inter-and intra-kernel optimization. Helium is implemented as preloaded library and uses a delay-optimize-replay mechanism in which kernel calls are intercepted, collectively optimized, and then executed according to an improved execution plan. This allows us to benefit from composite optimizations, on large, dynamically complex applications, with no impact on the code base. Our results show that Helium obtains at least the same, and frequently even better performance, than carefully handtuned code. Helium outperforms hand-optimized code where the exact dynamic composition of compute kernel cannot be known statically. In these cases, we demonstrate speedups of up to 3x over unoptimized code and an average speedup of 1.4x over hand optimized code.",
"title": ""
},
{
"docid": "4413ef4f192d5061da7bf2baa82c9048",
"text": "We developed and piloted a program for first-grade students to promote development of legible handwriting and writing fluency. The Write Start program uses a coteaching model in which occupational therapists and teachers collaborate to develop and implement a handwriting-writing program. The small-group format with embedded individualized supports allows the therapist to guide and monitor student performance and provide immediate feedback. The 12-wk program was implemented with 1 class of 19 students. We administered the Evaluation of Children's Handwriting Test, Minnesota Handwriting Assessment, and Woodcock-Johnson Fluency and Writing Samples test at baseline, immediately after the Write Start program, and at the end of the school year. Students made large, significant gains in handwriting legibility and speed and in writing fluency that were maintained at 6-mo follow-up. The Write Start program appears to promote handwriting and writing skills in first-grade students and is ready for further study in controlled trials.",
"title": ""
},
{
"docid": "e4920839c6b2bcacd72cbce578f44f01",
"text": "The ability to predict the reliability of a software system early in its development, e.g., during architectural design, can help to improve the system's quality in a cost-effective manner. Existing architecture-level reliability prediction approaches focus on system-level reliability and assume that the reliabilities of individual components are known. In general, this assumption is unreasonable, making component reliability prediction an important missing ingredient in the current literature. Early prediction of component reliability is a challenging problem because of many uncertainties associated with components under development. In this paper we address these challenges in developing a software component reliability prediction framework. We do this by exploiting architectural models and associated analysis techniques, stochastic modeling approaches, and information sources available early in the development lifecycle. We extensively evaluate our framework to illustrate its utility as an early reliability prediction approach.",
"title": ""
},
{
"docid": "37501837b77c336d01f751a0a2fafd1d",
"text": "Brain-inspired Hyperdimensional (HD) computing emulates cognition tasks by computing with hypervectors rather than traditional numerical values. In HD, an encoder maps inputs to high dimensional vectors (hypervectors) and combines them to generate a model for each existing class. During inference, HD performs the task of reasoning by looking for similarities of the input hypervector and each pre-stored class hypervector However, there is not a unique encoding in HD which can perfectly map inputs to hypervectors. This results in low HD classification accuracy over complex tasks such as speech recognition. In this paper we propose MHD, a multi-encoder hierarchical classifier, which enables HD to take full advantages of multiple encoders without increasing the cost of classification. MHD consists of two HD stages: a main stage and a decider stage. The main stage makes use of multiple classifiers with different encoders to classify a wide range of input data. Each classifier in the main stage can trade between efficiency and accuracy by dynamically varying the hypervectors' dimensions. The decider stage, located before the main stage, learns the difficulty of the input data and selects an encoder within the main stage that will provide the maximum accuracy, while also maximizing the efficiency of the classification task. We test the accuracy/efficiency of the proposed MHD on speech recognition application. Our evaluation shows that MHD can provide a 6.6× improvement in energy efficiency and a 6.3× speedup, as compared to baseline single level HD.",
"title": ""
},
{
"docid": "06654ef57e96d2e7cd969d271240371d",
"text": "The construction industry has been facing a paradigm shift to (i) increase; productivity, efficiency, infrastructure value, quality and sustainability, (ii) reduce; lifecycle costs, lead times and duplications, via effective collaboration and communication of stakeholders in construction projects. Digital construction is a political initiative to address low productivity in the sector. This seeks to integrate processes throughout the entire lifecycle by utilising building information modelling (BIM) systems. The focus is to create and reuse consistent digital information by the stakeholders throughout the lifecycle. However, implementation and use of BIM systems requires dramatic changes in the current business practices, bring new challenges for stakeholders e.g., the emerging knowledge and skill gap. This paper reviews and discusses the status of implementation of the BIM systems around the globe and their implications to the industry. Moreover, based on the lessons learnt, it will provide a guide to tackle these challenges and to facilitate successful transition towards utilizing BIM systems in construction projects.",
"title": ""
}
] |
scidocsrr
|
8e88c9080fff279703d2d6cd0b25773b
|
APPLYING MACHINE LEARNING ALGORITHMS IN SOFTWARE DEVELOPMENT
|
[
{
"docid": "a4a48ae446f073d6926e96703816ce47",
"text": "|This paper illustrates how software can be described precisely using LD-relations, how these descriptions can be presented in a readable manner using tabular notations, and one way such descriptions can be used to test programs. We describe an algorithm that can be used to generate a test oracle from program documentation, and present the results of using a tool based on it to help test part of a commercial network management application. The results demonstrate that these methods can be eeective at detecting errors and greatly increase the speed and accuracy of test evaluation when compared with manual evaluation. Such oracles can be used for unit testing, {in situ} testing, constructing self-checking software and ensuring consistency between code and documentation.",
"title": ""
}
] |
[
{
"docid": "990c123bcc1bf3bbf2a42990ba724169",
"text": "This paper demonstrates an innovative and simple solution for obstacle detection and collision avoidance of unmanned aerial vehicles (UAVs) optimized for and evaluated with quadrotors. The sensors exploited in this paper are low-cost ultrasonic and infrared range finders, which are much cheaper though noisier than more expensive sensors such as laser scanners. This needs to be taken into consideration for the design, implementation, and parametrization of the signal processing and control algorithm for such a system, which is the topic of this paper. For improved data fusion, inertial and optical flow sensors are used as a distance derivative for reference. As a result, a UAV is capable of distance controlled collision avoidance, which is more complex and powerful than comparable simple solutions. At the same time, the solution remains simple with a low computational burden. Thus, memory and time-consuming simultaneous localization and mapping is not required for collision avoidance.",
"title": ""
},
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
{
"docid": "b1cb31c70acb17d353116783845f85f5",
"text": "Wireless sensor networks have become increasingly popular due to their wide range of applications. Energy consumption is one of the biggest constraints of the wireless sensor node and this limitation combined with a typical deployment of large number of nodes have added many challenges to the design and management of wireless sensor networks. They are typically used for remote environment monitoring in areas where providing electrical power is difficult. Therefore, the devices need to be powered by batteries and alternative energy sources. Because battery energy is limited, the use of different techniques for energy saving is one of the hottest topics in WSNs. In this work, we present a survey of power saving and energy optimization techniques for wireless sensor networks, which enhances the ones in existence and introduces the reader to the most well known available methods that can be used to save energy. They are analyzed from several points of view: Device hardware, transmission, MAC and routing protocols.",
"title": ""
},
{
"docid": "182dc182f7c814c18cb83a0515149cec",
"text": "This paper discusses about methods for detection of leukemia. Various image processing techniques are used for identification of red blood cell and immature white cells. Different disease like anemia, leukemia, malaria, deficiency of vitamin B12, etc. can be diagnosed accordingly. Objective is to detect the leukemia affected cells and count it. According to detection of immature blast cells, leukemia can be identified and also define that either it is chronic or acute. To detect immature cells, number of methods are used like histogram equalization, linear contrast stretching, some morphological techniques like area opening, area closing, erosion, dilation. Watershed transform, K means, histogram equalization & linear contrast stretching, and shape based features are accurate 72.2%, 72%, 73.7 % and 97.8% respectively.",
"title": ""
},
{
"docid": "ad0892ee2e570a8a2f5a90883d15f2d2",
"text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.",
"title": ""
},
{
"docid": "c32cecbc4adc812de6e43b3b0b05866b",
"text": "Reinforcement learning for embodied agents is a challenging problem. The accumulated reward to be optimized is often a very rugged function, and gradient methods are impaired by many local optimizers. We demonstrate, in an experimental setting, that incorporating an intrinsic reward can smoothen the optimization landscape while preserving the global optimizers of interest. We show that policy gradient optimization for locomotion in a complex morphology is significantly improved when supplementing the extrinsic reward by an intrinsic reward defined in terms of the mutual information of time consecutive sensor readings.",
"title": ""
},
{
"docid": "76f66971abcce88b670940c8cc237cfc",
"text": "A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.",
"title": ""
},
{
"docid": "ca44496c768cfc73075c31a4fc010d4d",
"text": "The most downloaded articles from ScienceDirect in the last 90 days.",
"title": ""
},
{
"docid": "8e8905e6ae4c4d6cd07afa157b253da9",
"text": "Blockchain technology enables the execution of collaborative business processes involving untrusted parties without requiring a central authority. Specifically, a process model comprising tasks performed by multiple parties can be coordinated via smart contracts operating on the blockchain. The consensus mechanism governing the blockchain thereby guarantees that the process model is followed by each party. However, the cost required for blockchain use is highly dependent on the volume of data recorded and the frequency of data updates by smart contracts. This paper proposes an optimized method for executing business processes on top of commodity blockchain technology. The paper presents a method for compiling a process model into a smart contract that encodes the preconditions for executing each task in the process using a space-optimized data structure. The method is empirically compared to a previously proposed baseline by replaying execution logs, including one from a real-life business process, and measuring resource consumption.",
"title": ""
},
{
"docid": "b4a0ab9e1d074bff67f80df57a732d8d",
"text": "We study to what extend Chinese, Japanese and Korean faces can be classified and which facial attributes offer the most important cues. First, we propose a novel way of ob- taining large numbers of facial images with nationality la- bels. Then we train state-of-the-art neural networks with these labeled images. We are able to achieve an accuracy of 75.03% in the classification task, with chances being 33.33% and human accuracy 49% . Further, we train mul- tiple facial attribute classifiers to identify the most distinc- tive features for each group. We find that Chinese, Japanese and Koreans do exhibit substantial differences in certain at- tributes, such as bangs, smiling, and bushy eyebrows. Along the way, we uncover several gender-related cross-country patterns as well. Our work, which complements existing APIs such as Microsoft Cognitive Services and Face++, could find potential applications in tourism, e-commerce, social media marketing, criminal justice and even counter- terrorism.",
"title": ""
},
{
"docid": "433e7a8c4d4a16f562f9ae112102526e",
"text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.",
"title": ""
},
{
"docid": "925aacab817a20ff527afd4100c2a8bd",
"text": "This paper presents an efficient design approach for band-pass post filters in waveguides, based on mode-matching technique. With this technique, the characteristics of symmetrical cylindrical post arrangements in the cross-section of the considered waveguides can be analyzed accurately and quickly. Importantly, the approach is applicable to post filters in waveguide but can be extended to Substrate Integrated Waveguide (SIW) technologies. The fast computations provide accurate relationships for the K factors as a function of the post radii and the distances between posts, and allow analyzing the influence of machining tolerances on the filter performance. The computations are used to choose reasonable posts for designing band-pass filters, while the error analysis helps to judge whether a given machining precision is sufficient. The approach is applied to a Chebyshev band-pass post filter and a band-pass SIW filter with a center frequency of 10.5 GHz and a fractional bandwidth of 9.52% with verification via full-wave simulations using HFSS and measurements on manufactured prototypes.",
"title": ""
},
{
"docid": "fcf70e8f0a35ae805ec682a0d8cacae2",
"text": "One of the most important factors in training object recognition networks using convolutional neural networks (CNN) is the provision of annotated data accompanying human judgment. Particularly, in object detection or semantic segmentation, the annotation process requires considerable human effort. In this paper, we propose a semi-supervised learning (SSL)-based training methodology for object detection, which makes use of automatic labeling of un-annotated data by applying a network previously trained from an annotated dataset. Because an inferred label by the trained network is dependent on the learned parameters, it is often meaningless for re-training the network. To transfer a valuable inferred label to the unlabeled data, we propose a re-alignment method based on co-occurrence matrix analysis that takes into account one-hot-vector encoding of the estimated label and the correlation between the objects in the image. We used an MS-COCO detection dataset to verify the performance of the proposed SSL method and deformable neural networks (D-ConvNets) [1] as an object detector for basic training. The performance of the existing state-of-the-art detectors (D-ConvNets, YOLO v2 [2], and single shot multi-box detector (SSD) [3]) can be improved by the proposed SSL method without using the additional model parameter or modifying the network architecture.",
"title": ""
},
{
"docid": "8f1a5420deb75a2b664ceeaae8fc03f9",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "d1048297794d59687d3cf33eafbf0af3",
"text": "Voids are one of the major defects in solder balls and their detection and assessment can help in reducing unit and board yield issues caused by excessive or very large voids. Voids are difficult to detect using manual inspection alone. 2-D X-ray machines are often used to make voids visible to an operator for manual inspection. Automated methods do not give good accuracy in void detection and measurement because of a number of challenges present in 2-D X-ray images. Some of these challenges include vias, plated-through holes, reflections from the the plating or vias, inconsistent lighting, background traces, noise, void-like artifacts, and parallax effects. None of the existing methods that has been researched or utilized in equipment could accurately and repeatably detect voids in the presence of these challenges. This paper proposes a robust automatic void detection algorithm that detects voids accurately and repeatably in the presence of the aforementioned challenges. The proposed method operates on the 2-D X-ray images by first segregating each individual solder ball, including balls that are overshadowed by components, in preparation for treating each ball independently for void detection. Feature parameters are extracted through different classification steps to classify each artifact detected inside the solder ball as a candidate or phantom void. Several classification steps are used to tackle the challenges exhibited in the 2-D X-ray images. The proposed method is able to detect different-sized voids inside the solder balls under different brightness conditions and voids that are partially obscured by vias. Results show that the proposed method achieves a correlation squared of 86% when compared with manually measured and averaged data from experienced operators from both 2-D and 3-D X-ray tools. The proposed algorithm is fully automated and benefits the manufacturing process by reducing operator inspection time and removing the manual measurement variability from the results, thus providing a cost-effective solution to improve output product quality.",
"title": ""
},
{
"docid": "aaabe81401e33f7e2bb48dd6d5970f9b",
"text": "Brain tumor is the most life undermining sickness and its recognition is the most challenging task for radio logistics by manual detection due to varieties in size, shape and location and sort of tumor. So, detection ought to be quick and precise and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic sets based segmentation is performed to detect the tumor. MRI is an intense apparatus over CT to analyze the interior segments of the body and the tumor. Tumor is detected and true, false and indeterminacy values of tumor are determined by this technique and the proposed method produce the beholden results.",
"title": ""
},
{
"docid": "e6e7ee19b958b40abeed760be50f2583",
"text": "All distributed-generation units need to be equipped with an anti-islanding protection (AIP) scheme in order to avoid unintentional islanding. Unfortunately, most AIP methods fail to detect islanding if the demand in the islanded circuit matches the production in the island. Another concern is that many active AIP schemes cause power-quality problems. This paper proposes an AIP method which is based on the combination of a reactive power versus frequency droop and rate of change of frequency (ROCOF). The method is designed so that the injection of reactive power is of minor scale during normal operating conditions. Yet, the method can rapidly detect islanding which is verified by PSCAD/EMTDC simulations.",
"title": ""
},
{
"docid": "960022742172d6d0e883a23c74d800ef",
"text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.",
"title": ""
},
{
"docid": "17192a9edb1e6eb3d9809d432d2d38bc",
"text": "Purpose This concept paper presents the process of constructing a language tailored to describing insider threat incidents, for the purposes of mitigating threats originating from legitimate users in an IT infrastructure. Various information security surveys indicate that misuse by legitimate (insider) users has serious implications for the health of IT environments. A brief discussion of survey data and insider threat concepts is followed by an overview of existing research efforts to mitigate this particular problem. None of the existing insider threat mitigation frameworks provide facilities for systematically describing the elements of misuse incidents, and thus all threat mitigation frameworks could benefit from the existence of a domain specific language for describing legitimate user actions. The paper presents a language development methodology which centres upon ways to abstract the insider threat domain and approaches to encode the abstracted information into language semantics. Due to lack of suitable insider case repositories, and the fact that most insider misuse frameworks have not been extensively implemented in practice, the aforementioned language construction methodology is based upon observed information security survey trends and the study of existing insider threat and intrusion specification frameworks. The development of a domain specific language goes through various stages of refinement that might eventually contradict these preliminary findings. Practical implications This paper summarizes the picture of the insider threat in IT infrastructures and provides a useful reference for insider threat modeling researchers by indicating ways to abstract insider threats. The problems of constructing insider threat signatures and utilizing them in insider threat models are also discussed.",
"title": ""
},
{
"docid": "a09cfa27c7e5492c6d09b3dff7171588",
"text": "This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results combined with other knowledge provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies, 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential, 3) conduct more studies on estimation methods commonly used by the software industry, and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods",
"title": ""
}
] |
scidocsrr
|
a83a32f9af7e3c9f191fd6d47ed7b593
|
An algorithm for removing sensitive information: application to race-independent recidivism prediction
|
[
{
"docid": "18a524545090542af81e0a66df3a1395",
"text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.",
"title": ""
},
{
"docid": "22e21aab5d41c84a26bc09f9b7402efa",
"text": "Skeem for their thoughtful comments and suggestions.",
"title": ""
}
] |
[
{
"docid": "1bc38456a881461764b26a9785337ac9",
"text": "Recent studies have characterized how host genetics, prenatal environment and delivery mode can shape the newborn microbiome at birth. Following this, postnatal factors, such as antibiotic treatment, diet or environmental exposure, further modulate the development of the infant's microbiome and immune system, and exposure to a variety of microbial organisms during early life has long been hypothesized to exert a protective effect in the newborn. Furthermore, epidemiological studies have shown that factors that alter bacterial communities in infants during childhood increase the risk for several diseases, highlighting the importance of understanding early-life microbiome composition. In this review, we describe how prenatal and postnatal factors shape the development of both the microbiome and the immune system. We also discuss the prospects of microbiome-mediated therapeutics and the need for more effective approaches that can reconfigure bacterial communities from pathogenic to homeostatic configurations.",
"title": ""
},
{
"docid": "d62c4280bbef1039a393e6949a164946",
"text": "Purpose – Achieving goals of better integrated and responsive government services requires moving away from stand alone applications toward more comprehensive, integrated architectures. As a result there is a mounting pressure to integrate disparate systems to support information exchange and cross-agency business processes. There are substantial barriers that governments must overcome to achieve these goals and to profit from enterprise application integration (EAI). Design/methodology/approach – In the research presented here we develop and test a methodology aimed at overcoming the barriers blocking adoption of EAI. This methodology is based on a discrete-event simulation of public sector structure, business processes and applications in combination with an EAI perspective. Findings – The testing suggests that our methodology helps to provide insight into the myriad of existing applications, and the implications of EAI. Moreover, it helps to identify novel options, gain stakeholder commitment, let them agree on the sharing of EAI costs, and finally it supports collaborative decision-making between public agencies. Practical implications – The approach is found to be useful for making the business case for EAI projects, and gaining stakeholder commitment prior to implementation. Originality/value – The joint addressing of the barriers of public sector reform including the transformation of the public sector structure, gaining of stakeholders’ commitment, understanding EAI technology and dependencies between cross-agency business processes, and a fair division of costs and benefits over stakeholders.",
"title": ""
},
{
"docid": "a4738508bec1fe5975ce92c2239d30d0",
"text": "The transpalatal arch might be one of the most common intraoral auxiliary fixed appliances used in orthodontics in order to provide dental anchorage. The aim of the present case report is to describe a case in which an adult patient with a tendency to class III, palatal compression, and bilateral posterior crossbite was treated with double transpalatal bars in order to control the torque of both the first and the second molars. Double transpalatal arches on both first and second maxillary molars are a successful appliance in order to control the posterior sectors and improve the torsion of the molars. They allow the professional to gain overbite instead of losing it as may happen with other techniques and avoid enlarging of Wilson curve, obtaining a more stable occlusion without the need for extra help from bone anchorage.",
"title": ""
},
{
"docid": "28ff3b1e9f29d7ae4b57f6565330cde5",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "4fe25c65a4fd1886018482aceb82ad6f",
"text": "Article history: Received 21 March 2011 Revised 28 February 2012 Accepted 5 March 2012 Available online 26 March 2012 The purpose of this paper is (1) to identify critical issues in the current literature on ethical leadership — i.e., the conceptual vagueness of the construct itself and the focus on a Western-based perspective; and (2) to address these issues and recent calls for more collaboration between normative and empirical-descriptive inquiry of ethical phenomena by developing an interdisciplinary integrative approach to ethical leadership. Based on the analysis of similarities between Western and Eastern moral philosophy and ethics principles of the world religions, the present approach identifies four essential normative reference points of ethical leadership— the four central ethical orientations: (1) humane orientation, (2) justice orientation, (3) responsibility and sustainability orientation, and (4) moderation orientation. Research propositions on predictors and consequences of leader expressions of the four central orientations are offered. Real cases of ethical leadership choices, derived from in-depth interviews with international leaders, illustrate how the central orientations play out in managerial practice. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "63a481d452c6f566d88fdb9fa9d21703",
"text": "Compressive sensing (CS) is a novel sampling paradigm that samples signals in a much more efficient way than the established Nyquist sampling theorem. CS has recently gained a lot of attention due to its exploitation of signal sparsity. Sparsity, an inherent characteristic of many natural signals, enables the signal to be stored in few samples and subsequently be recovered accurately, courtesy of CS. This article gives a brief background on the origins of this idea, reviews the basic mathematical foundation of the theory and then goes on to highlight different areas of its application with a major emphasis on communications and network domain. Finally, the survey concludes by identifying new areas of research where CS could be beneficial.",
"title": ""
},
{
"docid": "073d93f6dc89e71ec54fa00e5dd2d194",
"text": "Naive Bayes and logistic regression perform well in different regimes. While the former is a very simple generative model which is efficient to train and performs well empirically in many applications,the latter is a discriminative model which often achieves better accuracy and can be shown to outperform naive Bayes asymptotically. In this paper, we propose a novel hybrid model, partitioned logistic regression, which has several advantages over both naive Bayes and logistic regression. This model separates the original feature space into several disjoint feature groups. Individual models on these groups of features are learned using logistic regression and their predictions are combined using the naive Bayes principle to produce a robust final estimation. We show that our model is better both theoretically and empirically. In addition, when applying it in a practical application, email spam filtering, it improves the normalized AUC score at 10% false-positive rate by 28.8% and 23.6% compared to naive Bayes and logistic regression, when using the exact same training examples.",
"title": ""
},
{
"docid": "c2da932aec6f3d8c6fddc9aaa994c9cd",
"text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and the \" triple-bottom line \" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review on the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics are summarized. Basic and complementary metrics under six impact categories: material, energy, water, solid wastes, toxic release, and pollutant effects, are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While providing a metrics management starting point, the user has the option of creating other metrics defined by the user. The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.",
"title": ""
},
{
"docid": "2ca62de84fadc2d00831447eb72325b7",
"text": "Research across many fields of medicine now points towards the clinical advantages of combining regenerative procedures with platelet-rich fibrin (PRF). This systematic review aimed to gather the extensive number of articles published to date on PRF in the dental field to better understand the clinical procedures where PRF may be utilized to enhance tissue/bone formation. Manuscripts were searched systematically until May 2016 and separated into the following categories: intrabony and furcation defect regeneration, extraction socket management, sinus lifting procedures, gingival recession treatment, and guided bone regeneration (GBR) including horizontal/vertical bone augmentation procedures. Only human randomized clinical trials were included for assessment. In total, 35 articles were selected and divided accordingly (kappa = 0.94). Overall, the use of PRF has been most investigated in periodontology for the treatment of periodontal intrabony defects and gingival recessions where the majority of studies have demonstrated favorable results in soft tissue management and repair. Little to no randomized clinical trials were found for extraction socket management although PRF has been shown to significantly decrease by tenfold dry sockets of third molars. Very little to no data was available directly investigating the effects of PRF on new bone formation in GBR, horizontal/vertical bone augmentation procedures, treatment of peri-implantitis, and sinus lifting procedures. Much investigation now supports the use of PRF for periodontal and soft tissue repair. Despite this, there remains a lack of well-conducted studies demonstrating convincingly the role of PRF during hard tissue bone regeneration. Future human randomized clinical studies evaluating the use of PRF on bone formation thus remain necessary. PRF was shown to improve soft tissue generation and limit dimensional changes post-extraction, with little available data to date supporting its use in GBR.",
"title": ""
},
{
"docid": "8222f36e2aa06eac76085fb120c8edab",
"text": "Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.",
"title": ""
},
{
"docid": "3cf1197436af89889edc04cae8acfb0f",
"text": "The rapid growth of new radio technologies for Smart City/Building/Home applications means that models of cross-technology interference are needed to inform the development of higher layer protocols and applications. We systematically investigate interference interactions between LoRa and IEEE 802.15.4g networks. Our results show that LoRa can obtain high packet reception rates, even in presence of strong IEEE 802.15.4g interference. IEEE 802.15.4g is also shown to have some resilience to LoRa interference. Both effects are highly dependent on the LoRa radio's spreading factor and bandwidth configuration, as well as on the channelization. The results are shown to arise from the interaction between the two radios' modulation schemes. The data have implications for the design and analysis of protocols for both radio technologies.",
"title": ""
},
{
"docid": "1846bbaac13e4a8d5c34b1657a5b634c",
"text": "Technology advancement entails an analog design scenario in which sophisticated signal processing algorithms are deployed in mixed-mode and radio frequency circuits to compensate for deterministic and random deficiencies of process technologies. This article reviews one such approach of applying a common communication technique, equalization, to correct for nonlinear distortions in analog circuits, which is analogized as non-ideal communication channels. The efficacy of this approach is showcased by a few latest advances in data conversion and RF transmission integrated circuits, where unprecedented energy efficiency, circuit linearity, and post-fabrication adaptability have been attained with low-cost digital processing.",
"title": ""
},
{
"docid": "3d8345898c5d058217447de807027902",
"text": "A problem in the design of decision aids is how to design them so that decision makers will trust them and therefore use them appropriately. This problem is approached in this paper by taking models of trust between humans as a starting point, and extending these to the human-machine relationship. A definition and model of human-machine trust are proposed, and the dynamics of trust between humans and machines are examined. Based upon this analysis, recommendations are made for calibrating users' trust in decision aids.",
"title": ""
},
{
"docid": "c4490ecc0b0fb0641dc41313d93ccf44",
"text": "Machine learning predictive modeling algorithms are governed by “hyperparameters” that have no clear defaults agreeable to a wide range of applications. The depth of a decision tree, number of trees in a forest, number of hidden layers and neurons in each layer in a neural network, and degree of regularization to prevent overfitting are a few examples of quantities that must be prescribed for these algorithms. Not only do ideal settings for the hyperparameters dictate the performance of the training process, but more importantly they govern the quality of the resulting predictive models. Recent efforts to move from a manual or random adjustment of these parameters include rough grid search and intelligent numerical optimization strategies. This paper presents an automatic tuning implementation that uses local search optimization for tuning hyperparameters of modeling algorithms in SAS® Visual Data Mining and Machine Learning. The AUTOTUNE statement in the TREESPLIT, FOREST, GRADBOOST, NNET, SVMACHINE, and FACTMAC procedures defines tunable parameters, default ranges, user overrides, and validation schemes to avoid overfitting. Given the inherent expense of training numerous candidate models, the paper addresses efficient distributed and parallel paradigms for training and tuning models on the SAS® ViyaTM platform. It also presents sample tuning results that demonstrate improved model accuracy and offers recommendations for efficient and effective model tuning.",
"title": ""
},
{
"docid": "de73980005a62a24820ed199fab082a3",
"text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.",
"title": ""
},
{
"docid": "11d458252bc83d062c8cf46a556c6c80",
"text": "A new bio-inspired optimisation algorithm: Bird Swarm Algorithm Xian-Bing Meng, X.Z. Gao, Lihua Lu, Yu Liu & Hengzhen Zhang a College of Information Engineering, Shanghai Maritime University, Shanghai, P.R. China b Chengdu Green Energy and Green Manufacturing R&D Center, Chengdu, P.R. China c Department of Electrical Engineering and Automation, Aalto University School of Electrical Engineering, Aalto, Finland d School of Computer Science, Fudan University, Shanghai, P.R. China e College of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, P.R. China Published online: 17 Jun 2015.",
"title": ""
},
{
"docid": "db1ea53535365f0ca01a806fa0f7b6d7",
"text": "OBJECTIVES\nThe purpose of this study was to examine the overlap in burnout and depression.\n\n\nMETHOD\nThe sample comprised 1,386 schoolteachers (mean [M]age = 43; Myears taught = 15; 77% women) from 18 different U.S. states. We assessed burnout, using the Shirom-Melamed Burnout Measure, and depression, using the depression module of the Patient Health Questionnaire.\n\n\nRESULTS\nTreated dimensionally, burnout and depressive symptoms were strongly correlated (.77; disattenuated correlation, .84). Burnout and depressive symptoms were similarly correlated with each of 3 stress-related factors, stressful life events, job adversity, and workplace support. In categorical analyses, 86% of the teachers identified as burned out met criteria for a provisional diagnosis of depression. Exploratory analyses revealed a link between burnout and anxiety.\n\n\nCONCLUSIONS\nThis study provides evidence that past research has underestimated burnout-depression overlap. The state of burnout is likely to be a form of depression. Given the magnitude of burnout-depression overlap, treatments for depression may help workers identified as \"burned out.\"",
"title": ""
},
{
"docid": "9b9cff2b6d1313844b88bad5a2724c52",
"text": "A robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Many robots have been built for manufacturing purpose and can be found in factories around the world. Designing of the latest inverted ROBOT which can be controlling using an APP for android mobile. We are developing the remote buttons in the android app by which we can control the robot motion with them. And in which we use Bluetooth communication to interface controller and android. Controller can be interfaced to the Bluetooth module though UART protocol. According to commands received from android the robot motion can be controlled. The consistent output of a robotic system along with quality and repeatability are unmatched. Pick and Place robots can be reprogrammable and tooling can be interchanged to provide for multiple applications.",
"title": ""
},
{
"docid": "7d7ae20eab5f945f9e08969b7ab9152d",
"text": "Semantic tagging of mathematical expressions (STME) gives semantic meanings to tokens in mathematical expressions. In this work, we propose a novel STME approach that relies on neither text along with expressions, nor labelled training data. Instead, our method only requires a mathematical grammar set. We point out that, besides the grammar of mathematics, the special property of variables and user habits of writing expressions help us understand the implicit intents of the user. We build a system that considers both restrictions from the grammar and variable properties, and then apply an unsupervised method to our probabilistic model to learn the user habits. To evaluate our system, we build large-scale training and test datasets automatically from a public math forum. The results demonstrate the significant improvement of our method, compared to the maximum-frequency baseline. We also create statistics to reveal the properties of mathematics language.",
"title": ""
},
{
"docid": "027a5da45d41ce5df40f6b342a9e4485",
"text": "GPipe is a scalable pipeline parallelism library that enables learning of giant deep neural networks. It partitions network layers across accelerators and pipelines execution to achieve high hardware utilization. It leverages recomputation to minimize activation memory usage. For example, using partitions over 8 accelerators, it is able to train networks that are 25× larger, demonstrating its scalability. It also guarantees that the computed gradients remain consistent regardless of the number of partitions. It achieves an almost linear speedup without any changes in the model parameters: when using 4× more accelerators, training the same model is up to 3.5× faster. We train a 557 million parameters AmoebaNet model and achieve a new state-ofthe-art 84.3% top-1 / 97.0% top-5 accuracy on ImageNet 2012 dataset. Finally, we use this learned model to finetune multiple popular image classification datasets and obtain competitive results, including pushing the CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.",
"title": ""
}
] |
scidocsrr
|
f88b750f2f2c21dc9dc32a200aef7dfc
|
Learning Multilayer Channel Features for Pedestrian Detection
|
[
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
},
{
"docid": "ca20d27b1e6bfd1f827f967473d8bbdd",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "fd14b9e25affb05fd9b05036f3ce350b",
"text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89%, outperforming the second best method by 10%.",
"title": ""
}
] |
[
{
"docid": "04d66f58cea190d7d7ec8654b6c81d3b",
"text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.",
"title": ""
},
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "adefe589c396a1af02ab9450f6ff87a0",
"text": "The large performance gap between main memory and secondary storage accounts for many design decisions of traditional database systems. With the upcoming availability of Non-Volatile Memory (NVM), which has latencies in the same order of magnitude as DRAM, is byte-addressable and persistent, a completely new type of technology is added to the memory stack. This changes some basic assumptions such as slow storage, block granular access, and that sequential accesses are much faster than random accesses. New ideas are therefore needed to efficiently leverage NVM. Although several new approaches can be found in the literature, the exact role of NVM is not yet clear. In this paper, we survey recent work in this area and classify the existing approaches. We focus on two key challenges: (1) integration of NVM into the memory hierarchy and (2) the design of NVM-aware data structures. We contrast the different approaches, highlight their advantages and limitations, and make recommendations.",
"title": ""
},
{
"docid": "58925e0088e240f42836f0c5d29f88d3",
"text": "SUMMARY\nDnaSP is a software package for the analysis of DNA polymorphism data. Present version introduces several new modules and features which, among other options allow: (1) handling big data sets (approximately 5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for an easy visualization of results.\n\n\nAVAILABILITY\nThe software package, including complete documentation and examples, is freely available to academic users from: http://www.ub.es/dnasp",
"title": ""
},
{
"docid": "1856090b401a304f1172c2958d05d6b3",
"text": "The Iranian government operates one of the largest and most sophisticated Internet censorship regimes in the world, but the mechanisms it employs have received little research attention, primarily due to lack of access to network connections within the country and personal risks to Iranian citizens who take part. In this paper, we examine the status of Internet censorship in Iran based on network measurements conducted from a major Iranian ISP during the lead up to the June 2013 presidential election. We measure the scope of the censorship by probing Alexa’s top 500 websites in 18 different categories. We investigate the technical mechanisms used for HTTP Host–based blocking, keyword filtering, DNS hijacking, and protocol-based throttling. Finally, we map the network topology of the censorship infrastructure and find evidence that it relies heavily on centralized equipment, a property that might be fruitfully exploited by next generation approaches to censorship circumvention.",
"title": ""
},
{
"docid": "5f8956868216a6c85fadfaba6aed1413",
"text": "Recent years have witnessed an incredibly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption of the availability of representative training data during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of stream raw data into information and knowledge representation, and accumulate experience over time to support future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.",
"title": ""
},
{
"docid": "2754f8f6357c15c6bc4e479e3823c288",
"text": "The world wide annual expenditures for cosmetics is estimated at U.S.$18 billion, and many players in the field are competing aggressively to capture more and more market. Hence, companies are interested to know about consumer’s attitude towards cosmetics so as to devise strategies to win over competition. The main purpose of this article is to investigate the influence of attitude on cosmetics buying behaviour. The research question is “what kind of attitudes do the customers have towards buying behaviour of cosmetic products?” A questionnaire was developed and distributed to female consumers in Bangalore city by using convenience sampling method. 118 completed questionnaires were returned and then 100 valid were analyzed by using ANOVA, mean and standard deviation. The result of the study confirms that age, occupation, marital status have positive influence towards cosmetic products. But income does not have any influence on the attitude towards cosmetic products.",
"title": ""
},
{
"docid": "c3fda89c22e17144b3046bb4639d6d7a",
"text": "Since 1990s Honeybee Robotics has been developing and testing surface coring drills for future planetary missions. Recently, we focused on developing a rotary-percussive core drill for the 2018 Mars Sample Return mission and in particular for the Mars Astrobiology Explorer-Cacher, MAX-C mission. The goal of the 2018 MAX-C mission is to acquire approximately 20 cores from various rocks and outcrops on the surface of Mars. The acquired cores, 1 cm diameter and 5 cm long, would be cached for return back to Earth either in 2022 or 2024, depending which of the MSR architectures is selected. We built a testbed coring drill that was used to acquire drilling data, such as power, rate of penetration, and Weight on Bit, in various rock formations. Based on these drilling data we designed a prototype Mars Sample Return coring drill. The proposed MSR drill is an arm-mounted, standalone device, requiring no additional arm actuation once positioned and preloaded. A low mass, compact transmission internal to the housing provides all of the actuation of the tool mechanisms. The drill uses a rotary-percussive drilling approach and can acquire a 1 cm diameter and 5 cm long core in Saddleback basalt in less than 30 minutes with only ∼20 N Weight on Bit and less than 100 Watt of power. The prototype MSR drill weighs approximately 5 kg1,2.",
"title": ""
},
{
"docid": "80af9f789b334aae324b549fffe4511a",
"text": "The research community is interested in developing automatic systems for the detection of events in video. This is particularly important in the field of sports data analytics. This paper presents an approach for identifying major complex events in soccer videos, starting from object detection and spatial relations between objects. The proposed framework, firstly, detects objects from each single video frame providing a set of candidate objects with associated confidence scores. The event detection system, then, detects events by means of rules which are based on temporal and logical combinations of the detected objects and their relative distances. The effectiveness of the framework is preliminary demonstrated over different events like \"Ball possession\" and \"Kicking the ball\".",
"title": ""
},
{
"docid": "d965ee924caff5a8cc492d1ef18e02a7",
"text": "Chronic exertional anterior compartment syndrome is debilitating disease of the lower limb. Limited symptomology characterises the clinical picture at rest, pain during sporting activities, tumefaction, and contractures of the limb as well impotency by the pain of the entire forefoot and hypoesthesia. Usually, the most affected patients are athletes. We analyse a case of chronic post-traumatic compartment syndrome of the anterior tibial muscle in an unsportsmanlike patient.",
"title": ""
},
{
"docid": "0f5d64361b601a06a675a4d900152f3f",
"text": "Big Data is a term which denotes data that is beyond storage capacity and processing capabilities of classical computer and getting some insight from large amount of data is a very big challenge at hand. Quantum Computing comes to rescue by offering a lot of promises in information processing systems, particularly in Big Data Analytics. In this paper, we have reviewed the available literature on Big Data Analytics using Quantum Computing for Machine Learning and its current state of the art. We categorized the Quantum Machine learning in different subfields depending upon the logic of their learning followed by a review in each technique. Quantum Walks used to construct Quantum Artificial Neural Networks, which exponentially speed-up the quantum machine learning algorithm is discussed. Quantum Supervised and Unsupervised machine learning and its benefits are compared with that of Classical counterpart. The limitations of some of the existing Machine learning techniques and tools are enunciated, and the significance of Quantum computing in Big Data Analytics is incorporated. Being in its infancy as a totally new field, Quantum computing comes up with a lot of open challenges as well. The challenges, promises, future directions and techniques of the Quantum Computing in Machine Learning are also highlighted.",
"title": ""
},
{
"docid": "b20362bd0533562e80cc9e0b89e99506",
"text": "The aim of this paper is to explicate what is special about emotional information processing, emphasizing the neural foundations that underlie the experience and expression of fear. A functional, anatomical model of defense behavior in animals is presented and applications are described in cognitive and physiological studies of human affect. It is proposed that unpleasant emotions depend on the activation of an evolutionarily primitive subcortical circuit, including the amygdala and the neural structures to which it projects. This motivational system mediates specific autonomic (e.g., heart rate change) and somatic reflexes (e.g., startle change) that originally promoted survival in dangerous conditions. These same response patterns are illustrated in humans, as they process objective, memorial, and media stimuli. Furthermore, it is shown how variations in the neural circuit and its outputs may separately characterize cue-specific fear (as in specific phobia) and more generalized anxiety. Finally, again emphasizing links between the animal and human data, we focus on special, attentional features of emotional processing: The automaticity of fear reactions, hyper-reactivity to minimal threat-cues, and evidence that the physiological responses in fear may be independent of slower, language-based appraisal processes.",
"title": ""
},
{
"docid": "08f7b46ed2d134737c62381a7e193af3",
"text": "We have been advocating cognitive developmental robotics to obtain new insight into the development of human cognitive functions by utilizing synthetic and constructive approaches. Among the different emotional functions, empathy is difficult to model, but essential for robots to be social agents in our society. In my previous review on artificial empathy (Asada, 2014b), I proposed a conceptual model for empathy development beginning with emotional contagion to envy/schadenfreude along with self/other differentiation. In this article, the focus is on two aspects of this developmental process, emotional contagion in relation to motor mimicry, and cognitive/affective aspects of the empathy. It begins with a summary of the previous review (Asada, 2014b) and an introduction to affective developmental robotics as a part of cognitive developmental robotics focusing on the affective aspects. This is followed by a review and discussion on several approaches for two focused aspects of affective developmental robotics. Finally, future issues involved in the development of a more authentic form of artificial empathy are discussed.",
"title": ""
},
{
"docid": "229cdcef4b7a28b73d4bde192ad0cb53",
"text": "The problem of anomaly detection is a critical topic across application domains and is the subject of extensive research. Applications include finding frauds and intrusions, warning on robot safety, and many others. Standard approaches in this field exploit simple or complex system models, created by experts using detailed domain knowledge. In this paper, we put forth a statistics-based anomaly detector motivated by the fact that anomalies are sparse by their very nature. Powerful sparsity directed algorithms—namely Robust Principal Component Analysis and the Group Fused LASSO—form the basis of the methodology. Our novel unsupervised single-step solution imposes a convex optimisation task on the vector time series data of the monitored system by employing group-structured, switching and robust regularisation techniques. We evaluated our method on data generated by using a Baxter robot arm that was disturbed randomly by a human operator. Our procedure was able to outperform two baseline schemes in terms of F1 score. Generalisations to more complex dynamical scenarios are desired.",
"title": ""
},
{
"docid": "ea277c160544fb54bef69e2a4fa85233",
"text": "This paper proposes approaches to measure linkography in protocol studies of designing. It outlines the ideas behind using clustering and Shannon’s entropy as measures of designing behaviour. Hypothetical cases are used to illustrate the methods. The paper concludes that these methods may form the basis of a new tool to assess designer behaviour in terms of chunking of design ideas and the opportunities for idea development.",
"title": ""
},
{
"docid": "06a6a63bf7b675557dc8cfaccee60831",
"text": "By 2020, it is estimated that the number of connected devices is expected to grow exponentially to 50 billion. Internet of things has gained extensive attention, the deployment of sensors, actuators are increasing at a rapid pace around the world. There is tremendous scope for more streamlined living through an increase of smart services, but this coincides with an increase in security and privacy concerns. There is a need to perform a systematic review of Information security governance frameworks in the Internet of things (IoT). Objective – The aim of this paper to evaluate systematic review of information security management frameworks which are related to the Internet of things (IoT). It will also discuss different information security frameworks that cover IoT models and deployments across different verticals. These frameworks are classified according to the area of the framework, the security executives and senior management of any enterprise that plans to start using smart services needs to define a clear governance strategy concerning the security of their assets, this system review will help them to make a better decision for their investment for secure IoT deployments. Method – A set of standard criteria has been established to analyze which security framework will be the best fit among these classified security structures in particularly for Internet of Things (IoT). The first step to evaluate security framework by using standard criteria methodology is to identify resources, the security framework for IoT is selected to be assessed according to CCS. The second step is to develop a set of Security Targets (ST). The ST is the set of criteria to apply for the target of evaluation (TOE). The third step is data extraction, fourth step data synthesis, and final step is to write-up study as a report. Conclusion– After reviewing four information security risk frameworks, this study makes some suggestions related to information security risk governance in Internet of Things (IoT). The organizations that have decided to move to smart devices have to define the benefits and risks and deployment processes to manage security risk. The information security risk policies should comply with an organization's IT policies and standards to protect the confidentiality, integrity and availability of information security. The study observes some of the main processes that are needed to manage security risks. Moreover, the paper also drew attention on some suggestions that may assist companies which are associated with the information security framework in Internet of things (IoT).",
"title": ""
},
{
"docid": "cd81ad1c571f9e9a80e2d09582b00f9a",
"text": "OBJECTIVE\nThe biologic basis for gender identity is unknown. Research has shown that the ratio of the length of the second and fourth digits (2D:4D) in mammals is influenced by biologic sex in utero, but data on 2D:4D ratios in transgender individuals are scarce and contradictory. We investigated a possible association between 2D:4D ratio and gender identity in our transgender clinic population in Albany, New York.\n\n\nMETHODS\nWe prospectively recruited 118 transgender subjects undergoing hormonal therapy (50 female to male [FTM] and 68 male to female [MTF]) for finger length measurement. The control group consisted of 37 cisgender volunteers (18 females, 19 males). The length of the second and fourth digits were measured using digital calipers. The 2D:4D ratios were calculated and analyzed with unpaired t tests.\n\n\nRESULTS\nFTM subjects had a smaller dominant hand 2D:4D ratio (0.983 ± 0.027) compared to cisgender female controls (0.998 ± 0.021, P = .029), but a ratio similar to control males (0.972 ± 0.036, P =.19). There was no difference in the 2D:4D ratio of MTF subjects (0.978 ± 0.029) compared to cisgender male controls (0.972 ± 0.036, P = .434).\n\n\nCONCLUSION\nOur findings are consistent with a biologic basis for transgender identity and the possibilities that FTM gender identity is affected by prenatal androgen activity but that MTF transgender identity has a different basis.\n\n\nABBREVIATIONS\n2D:4D = 2nd digit to 4th digit; FTM = female to male; MTF = male to female.",
"title": ""
},
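A brief illustrative sketch of the measurement and comparison described in the abstract above: compute each subject's 2D:4D ratio from caliper readings and compare groups with an unpaired t-test. All millimetre values below are invented placeholders, not the study's data.

```python
# Hedged sketch: 2D:4D ratios plus an unpaired t-test (scipy.stats.ttest_ind).
# The measurements are made up for demonstration only.
from scipy import stats

def ratio(second_digit_mm, fourth_digit_mm):
    """2D:4D ratio from caliper measurements in millimetres."""
    return second_digit_mm / fourth_digit_mm

ftm_dominant_hand = [ratio(69.1, 70.5), ratio(68.4, 69.9), ratio(70.2, 71.3)]
cis_female_controls = [ratio(68.0, 68.1), ratio(69.5, 69.4), ratio(67.9, 68.2)]

t_stat, p_value = stats.ttest_ind(ftm_dominant_hand, cis_female_controls)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```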
{
"docid": "93f8ba979ea679d6b9be6f949f8ee6ed",
"text": "This paper presents a method for Simultaneous Localization and Mapping (SLAM), relying on a monocular camera as the only sensor, which is able to build outdoor, closed-loop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters.",
"title": ""
},
{
"docid": "d3095d26a0fa1ea75b6496d59cbb6b8e",
"text": "This paper describes the application of artificial intelligence (AI) to the creation of digital art. AI is a computational paradigm that codifies intelligence into machines. There are generally three types of AI and these are machine learning, evolutionary programming and soft computing. Machine learning is the statistical approach to building intelligent systems. Evolutionary programming is the use of natural evolutionary systems to design intelligent machines. Some of the evolutionary programming systems include genetic algorithm which is inspired by the principles of evolution and swarm optimization which is inspired by the swarming of birds, fish, ants etc. Soft computing includes techniques such as agent based modelling and fuzzy logic. Opportunities on the applications of these to digital art are explored.",
"title": ""
},
{
"docid": "94e5d19f134670a6ae982311e6c1ccc1",
"text": "In mobile ad hoc networks, it is usually assumed that all the nodes belong to the same authority; therefore, they are expected to cooperate in order to support the basic functions of the network such as routing. In this paper, we consider the case in which each node is its own authority and tries to maximize the bene ts it gets from the network. In order to stimulate cooperation, we introduce a virtual currency and detail the way it can be protected against theft and forgery. We show that this mechanism ful lls our expectations without signi cantly decreasing the performance of the network.",
"title": ""
}
] |
scidocsrr
|
5ef8c0bb71a9be7bb95b7ebd3e936980
|
Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision
|
[
{
"docid": "d8f21e77a60852ea83f4ebf74da3bcd0",
"text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.",
"title": ""
},
{
"docid": "21abc097d58698c5eae1cddab9bf884e",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, that involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as average humans in deathmatch scenarios.",
"title": ""
},
{
"docid": "04647771810ac62b27ee8da12833a02d",
"text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.",
"title": ""
}
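As a rough illustration of the kind of convex objective the MTRL abstract above describes, the following LaTeX sketch shows a squared loss plus the two regularizers (overall weight decay and a task-covariance term). The exact notation and constants are assumptions based on the commonly cited formulation and may differ from the paper.

```latex
\min_{\mathbf{W},\mathbf{b},\boldsymbol{\Omega}}\;
  \sum_{i=1}^{m}\sum_{j=1}^{n_i}
    \bigl(y_{ij}-\mathbf{w}_i^{\top}\mathbf{x}_{ij}-b_i\bigr)^2
  +\frac{\lambda_1}{2}\operatorname{tr}\!\bigl(\mathbf{W}\mathbf{W}^{\top}\bigr)
  +\frac{\lambda_2}{2}\operatorname{tr}\!\bigl(\mathbf{W}\boldsymbol{\Omega}^{-1}\mathbf{W}^{\top}\bigr)
  \quad\text{s.t.}\quad \boldsymbol{\Omega}\succeq 0,\ \operatorname{tr}(\boldsymbol{\Omega})=1
```

Here the matrix plays the role of a task covariance, so positive, negative, and zero task correlations can all be expressed, and it is learned alternately with the task weight matrix.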
] |
[
{
"docid": "2cec15eeca04fe361f7d11e63b9f2fa7",
"text": "We construct a local Lax-Friedrichs type positivity-preserving flux for compressible Navier-Stokes equations, which can be easily extended to high dimensions for generic forms of equations of state, shear stress tensor and heat flux. With this positivity-preserving flux, any finite volume type schemes including discontinuous Galerkin (DG) schemes with strong stability preserving Runge-Kutta time discretizations satisfy a weak positivity property. With a simple and efficient positivity-preserving limiter, high order explicit Runge-Kutta DG schemes are rendered preserving the positivity of density and internal energy without losing local conservation or high order accuracy. Numerical tests suggest that the positivity-preserving flux and the positivity-preserving limiter do not induce excessive artificial viscosity, and the high order positivity-preserving DG schemes without other limiters can produce satisfying non-oscillatory solutions when the nonlinear diffusion in compressible Navier-Stokes equations is accurately resolved.",
"title": ""
},
{
"docid": "c08bbd6acd494d36afc60f9612fee0bb",
"text": "Guided wave imaging has shown great potential for structural health monitoring applications by providing a way to visualize and characterize structural damage. For successful implementation of delay-and-sum and other elliptical imaging algorithms employing guided ultrasonic waves, some degree of mode purity is required because echoes from undesired modes cause imaging artifacts that obscure damage. But it is also desirable to utilize multiple modes because different modes may exhibit increased sensitivity to different types and orientations of defects. The well-known modetuning effect can be employed to use the same PZT transducers for generating and receiving multiple modes by exciting the transducers with narrowband tone bursts at different frequencies. However, this process is inconvenient and timeconsuming, particularly if extensive signal averaging is required to achieve a satisfactory signal-to-noise ratio. In addition, both acquisition time and data storage requirements may be prohibitive if signals from many narrowband tone burst excitations are measured. In this paper, we utilize a chirp excitation to excite PZT transducers over a broad frequency range to acquire multi-modal data with a single transmission, which can significantly reduce both the measurement time and the quantity of data. Each received signal from a chirp excitation is post-processed to obtain multiple signals corresponding to different narrowband frequency ranges. Narrowband signals with the best mode purity and echo shape are selected and then used to generate multiple images of damage in a target structure. The efficacy of the proposed technique is demonstrated experimentally using an aluminum plate instrumented with a spatially distributed array of piezoelectric sensors and with simulated damage.",
"title": ""
},
{
"docid": "e04cccfd59c056678e39fc4aed0eaa2b",
"text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "802ea65d9d03f7a39797eeed1e7fb283",
"text": "Chinese clinical entity recognition is a fundamental task of Chinese clinical natural language processing, which has attracted plenty of attention. In this paper, we propose a novel neural network, called attention-based CNN-LSTM-CRF, for this task. The neural network employs a CNN (convolutional neural network) layer to capture local context information of words of interest, a LSTM (long-short term memory) layer to obtain global information of each sentence, an attention layer to select relevant words, and a CRF layer to predict a label sequence for an input sentence. In order to evaluate the performance of the proposed method, we compare it with other two state-of-the-art methods, CRF (conditional random field) and LSTM-CRF, on two benchmark datasets. Experimental results show that the proposed neural network outperforms CRF and LSTM-CRF.",
"title": ""
},
{
"docid": "b540cb8f0f0825662d21a5e2ed100012",
"text": "Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users don’t have to endure banner ads that hold very little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-depreciating attention span, brands can retain their audience and have a capacious creative potential. While an assortment of marketing strategies are conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis on the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement hence providing suggestions in adapting advertising and marketing strategies over Twitter and Instagram.",
"title": ""
},
{
"docid": "186f3c7df4505fd488e7effacdae4df2",
"text": "Patients nationwide experience difficulties in accessing medical appointments in a timely manner due to long backlogs. Meanwhile, patients do not always show up for their scheduled services, with significant no-show rates. Unattended appointments result in under-utilization of a clinic’s valuable resources, and limit the access for other patients who could have filled the missed slots. Medical practices aim to utilize their valuable resources efficiently, provide timely access to care, and at the same time they strive to provide short waits for patients present at the medical facility. We study the joint problem of determining the panel size of a medical practice and the number of offered appointment slots per day, so that patients do not face long backlogs and the clinic is not overcrowded. We explicitly model the two separate time scales involved in accessing medical care: appointment delay (order of days, weeks) and clinic delay (order of minutes, hours). We analyze the two queueing systems associated with each type of delay, and provide explicit expressions for the performance measures of interest based on diffusion approximations. In our analysis we capture many features of the complex reality of outpatient care, including patients’ non-punctuality, no-shows, balking behavior, and stochastic service times. Two additional distinctive characteristics of this study are the balking behavior of the patients who face long appointment backlogs, and the transient-state analysis of the clinic delay, which allow the study of a system with traffic intensity greater than one. Concerning the panel sizing and appointment scheduling decisions, our analysis provides theoretical and numerical support that the two-variable optimization problem reduces to a single variable-one, and either an “Open Access” policy is optimal, or supply and demand are perfectly matched and are both very small (“Limited Access” regime). Under our Open Access regime, the clinic offers as many appointment slots as possible per day, and the optimal panel size depends on the clinic’s characteristics. A solution within the Limited Access regime arises when the service times are long, and the patients are very sensitive to the appointment delay.",
"title": ""
},
{
"docid": "2a0194f2af99910546ece94abc4ee6e9",
"text": "CBCT is a widely applied imaging modality in dentistry. It enables the visualization of high-contrast structures of the oral region (bone, teeth, air cavities) at a high resolution. CBCT is now commonly used for the assessment of bone quality, primarily for pre-operative implant planning. Traditionally, bone quality parameters and classifications were primarily based on bone density, which could be estimated through the use of Hounsfield units derived from multidetector CT (MDCT) data sets. However, there are crucial differences between MDCT and CBCT, which complicates the use of quantitative gray values (GVs) for the latter. From experimental as well as clinical research, it can be seen that great variability of GVs can exist on CBCT images owing to various reasons that are inherently associated with this technique (i.e. the limited field size, relatively high amount of scattered radiation and limitations of currently applied reconstruction algorithms). Although attempts have been made to correct for GV variability, it can be postulated that the quantitative use of GVs in CBCT should be generally avoided at this time. In addition, recent research and clinical findings have shifted the paradigm of bone quality from a density-based analysis to a structural evaluation of the bone. The ever-improving image quality of CBCT allows it to display trabecular bone patterns, indicating that it may be possible to apply structural analysis methods that are commonly used in micro-CT and histology.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "c188731b9047bbbe70c35690a5a584ab",
"text": "Resource Managers like YARN and Mesos have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault tolerance, task scheduling and coordination) and reimplement common mechanisms (e.g., caching, bulk-data transfers). This article presents REEF, a development framework that provides a control plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource reuse for data caching and state management abstractions that greatly ease the development of elastic data processing pipelines on cloud platforms that support a Resource Manager service. We illustrate the power of REEF by showing applications built atop: a distributed shell application, a machine-learning framework, a distributed in-memory caching system, and a port of the CORFU system. REEF is currently an Apache top-level project that has attracted contributors from several institutions and it is being used to develop several commercial offerings such as the Azure Stream Analytics service.",
"title": ""
},
{
"docid": "a92dc78bd087d6ffba1eede1da2e2c30",
"text": "Novelty identification is accustomed to distinguishing novel data from an approaching stream of documents. In this study, we proposed a novel methodology for document-level novelty identification by utilizing document-to-sentence-level strategy. This work first splits a document into sentences, decides the novelty of every sentence, then registers the record-level novelty score in view of an altered limit. Exploratory results on an arrangement of document demonstrate that our methodology beats standard document-level novelty discovery as far as repetition exactness and excess review. This work applies on the document-level information from an arrangement of documents. It is valuable in identifying novel data in information with a high rate of new documents. It has been effectively incorporated in a true novelty identification framework in the zone of information retrieval.",
"title": ""
},
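A minimal sketch of the document-to-sentence idea summarized above: split the incoming document into sentences, score each sentence against previously seen documents, and aggregate into a document-level novelty score. TF-IDF cosine similarity and the 0.6 threshold are assumptions for illustration, not the paper's exact features or settings.

```python
# Hedged sketch of document-to-sentence novelty scoring.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def document_novelty(new_doc, history_docs, sentence_threshold=0.6):
    # Naive sentence split; a real system would use a proper tokenizer.
    sentences = [s.strip() for s in new_doc.split(".") if s.strip()]
    vectorizer = TfidfVectorizer().fit(history_docs + sentences)
    history_vecs = vectorizer.transform(history_docs)
    novel_flags = []
    for sentence in sentences:
        sims = cosine_similarity(vectorizer.transform([sentence]), history_vecs)
        novel_flags.append(sims.max() < sentence_threshold)  # unseen content?
    # Document-level score: fraction of sentences judged novel.
    return sum(novel_flags) / len(novel_flags)

score = document_novelty("Solar sails were tested in orbit. The test succeeded.",
                         ["Rocket engines were tested on the ground."])
print(f"novelty score: {score:.2f}")
```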
{
"docid": "f2fa4fa43c21e8c65c752d6ad1d39d06",
"text": "Singing voice synthesis techniques have been proposed based on a hidden Markov model (HMM). In these approaches, the spectrum, excitation, and duration of singing voices are simultaneously modeled with context-dependent HMMs and waveforms are generated from the HMMs themselves. However, the quality of the synthesized singing voices still has not reached that of natural singing voices. Deep neural networks (DNNs) have largely improved on conventional approaches in various research areas including speech recognition, image recognition, speech synthesis, etc. The DNN-based text-to-speech (TTS) synthesis can synthesize high quality speech. In the DNN-based TTS system, a DNN is trained to represent the mapping function from contextual features to acoustic features, which are modeled by decision tree-clustered context dependent HMMs in the HMM-based TTS system. In this paper, we propose singing voice synthesis based on a DNN and evaluate its effectiveness. The relationship between the musical score and its acoustic features is modeled in frames by a DNN. For the sparseness of pitch context in a database, a musical-note-level pitch normalization and linear-interpolation techniques are used to prepare the excitation features. Subjective experimental results show that the DNN-based system outperformed the HMM-based system in terms of naturalness.",
"title": ""
},
{
"docid": "8a95182e28a22b074e9cccf01ab05b1c",
"text": "We introduce negative binomial matrix factorization (NBMF), a matrix factorization technique specially designed for analyzing over-dispersed count data. It can be viewed as an extension of Poisson matrix factorization (PF) perturbed by a multiplicative term which models exposure. This term brings a degree of freedom for controlling the dispersion, making NBMF more robust to outliers. We show that NBMF allows to skip traditional pre-processing stages, such as binarization, which lead to loss of information. Two estimation approaches are presented: maximum likelihood and variational Bayes inference. We test our model with a recommendation task and show its ability to predict user tastes with better precision than PF.",
"title": ""
},
{
"docid": "e44d7f7668590726def631c5ec5f5506",
"text": "Today thanks to low cost and high performance DSP's, Kalman filtering (KF) becomes an efficient candidate to avoid mechanical sensors in motor control. We present in this work experimental results by using a steady state KF method to estimate the speed and rotor position for hybrid stepper motor. With this method the computing time is reduced. The Kalman gain is pre-computed from numerical simulation and introduced as a constant in the real time algorithm. The load torque is also on-line estimated by the same algorithm. At start-up the initial rotor position is detected by the impulse current method.",
"title": ""
},
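A generic sketch of the steady-state Kalman filter loop with a pre-computed constant gain that the abstract above relies on; the state-space matrices, gain values, and measurements are placeholders rather than the hybrid stepper motor's actual model.

```python
# Steady-state Kalman filter with a constant, pre-computed gain K.
import numpy as np

A = np.array([[1.0, 0.01], [0.0, 1.0]])   # state transition (e.g. position, speed)
B = np.array([[0.0], [0.01]])             # input matrix
C = np.array([[1.0, 0.0]])                # only the first state is measured
K = np.array([[0.35], [0.18]])            # constant gain, assumed computed offline

x = np.zeros((2, 1))                      # state estimate
for u, z in [(1.0, 0.02), (1.0, 0.05), (1.0, 0.09)]:   # (input, measurement) pairs
    x = A @ x + B * u                     # predict
    x = x + K @ (np.array([[z]]) - C @ x) # correct with the constant gain
print(x.ravel())
```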
{
"docid": "47df1c464b766f2dbd1e7e0cc7ccb6b2",
"text": "he past two decades have seen a dramatic change in the role of risk management in corporations. Twenty years ago, the job of the corporate risk manager—typically, a low-level position in the corporate treasury—involved mainly the purchase of insurance. At the same time, treasurers were responsible for the hedging of interest rate and foreign exchange exposures. Over the last ten years, however, corporate risk management has expanded well beyond insurance and the hedging of financial exposures to include a variety of other kinds of risk—notably operational risk, reputational risk, and, most recently, strategic risk. What’s more, at a large and growing number of companies, the risk management function is directed by a senior executive with the title of chief risk officer (CRO) and overseen by a board of directors charged with monitoring risk measures and setting limits for these measures. A corporation can manage risks in one of two fundamentally different ways: (1) one risk at a time, on a largely compartmentalized and decentralized basis; or (2) all risks viewed together within a coordinated and strategic framework. The latter approach is often called “enterprise risk management,” or “ERM” for short. In this article, we suggest that companies that succeed in creating an effective ERM have a long-run competitive advantage over those that manage and monitor risks individually. Our argument in brief is that, by measuring and managing its risks consistently and systematically, and by giving its business managers the information and incentives to optimize the tradeoff between risk and return, a company strengthens its ability to carry out its strategic plan. In the pages that follow, we start by explaining how ERM can give companies a competitive advantage and add value for shareholders. Next we describe the process and challenges involved in implementing ERM. We begin by discussing how a company should assess its risk “appetite,” an assessment that should guide management’s decision about how much and which risks to retain and which to lay off. Then we show how companies should measure their risks. Third, we discuss various means of laying off “non-core” risks, which, as we argue below, increases the firm’s capacity for bearing those “core” risks the firm chooses to retain. Though ERM is conceptually straightforward, its implementation is not. And in the last—and longest—section of the chapter, we provide an extensive guide to the major difficulties that arise in practice when implementing ERM.",
"title": ""
},
{
"docid": "02d5de2ea87f5bcf27e45fc073fc6b23",
"text": "Sentiment analysis aims to extract users’ opinions from review documents. Nowadays, there are two main approaches for sentiment analysis: the semantic orientation and the machine learning. Sentiment analysis approaches based on Machine Learning (ML) methods work over a set of features extracted from the users’ opinions. However, the high dimensionality of the feature vector reduces the effectiveness of this approach. In this sense, we propose a sentiment classification method based on feature selection mechanisms and ML methods. The present method uses a hybrid feature extraction method based on POS pattern and dependency parsing. The features obtained are enriched semantically through commonsense knowledge bases. Then, a feature selection method is applied to eliminate the noisy and irrelevant features. Finally, a set of classifiers is trained in order to classify unknown data. To prove the effectiveness of our approach, we have conducted an evaluation in the movies and technological products domains. Also, our proposal was compared with well-known methods and algorithms used on the sentiment classification field. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.786 to 0.898 for the aforementioned domains.",
"title": ""
},
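A hedged sketch of the overall pipeline the abstract above outlines: feature extraction, a feature-selection step to drop noisy and irrelevant features, then classifier training. Plain TF-IDF n-grams with chi-squared selection stand in for the paper's POS-pattern, dependency-parse, and commonsense-enriched features; the reviews and labels are invented.

```python
# Feature extraction -> feature selection -> classification, as a scikit-learn pipeline.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

reviews = ["great battery life and screen", "terrible plot, boring movie",
           "loved the acting", "awful camera, waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=10)),   # keep the 10 most informative features
    ("clf", LinearSVC()),
])
model.fit(reviews, labels)
print(model.predict(["boring and awful", "great screen"]))
```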
{
"docid": "1be58e70089b58ca3883425d1a46b031",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different proprieties.",
"title": ""
},
{
"docid": "64702593fd9271b7caa4178594f26469",
"text": "Microsoft operates the Azure SQL Database (ASD) cloud service, one of the dominant relational cloud database services in the market today. To aid the academic community in their research on designing and efficiently operating cloud database services, Microsoft is introducing the release of production-level telemetry traces from the ASD service. This telemetry data set provides, over a wide set of important hardware resources and counters, the consumption level of each customer database replica. The first release will be a multi-month time-series data set that includes the full cluster traces from two different ASD global regions.",
"title": ""
},
{
"docid": "dd9f40db5e52817b25849282ffdafe26",
"text": "Pattern classification methods based on learning-from-examples have been widely applied to character recognition from the 1990s and have brought forth significant improvements of recognition accuracies. This kind of methods include statistical methods, artificial neural networks, support vector machines, multiple classifier combination, etc. In this chapter, we briefly review the learning-based classification methods that have been successfully applied to character recognition, with a special section devoted to the classification of large category set. We then discuss the characteristics of these methods, and discuss the remaining problems in character recognition that can be potentially solved by machine learning methods.",
"title": ""
}
] |
scidocsrr
|
5aeb525a835ebe6b54f29e2636c98880
|
Another Generalization of Wiener's Attack on RSA
|
[
{
"docid": "fe0587c51c4992aa03f28b18f610232f",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
},
{
"docid": "07ed58a5c4fdd926924ad2590ff33113",
"text": "The number field sieve is an algorithm to factor integers of the form r e ± s for small positive r and s . This note is intended as a ‘report on work in progress’ on this algorithm. We informally describe the algorithm, discuss several implementation related aspects, and present some of the factorizations obtained so far. We also mention some solutions to the problems encountered when generalizing the algorithm to general integers using an idea of Buhler and Pomerance. It is not unlikely that this leads to a general purpose factoring algorithm that is asymptotically substantially faster than the fastest factoring algorithms known so far, like the multiple polynomial quadratic sieve.",
"title": ""
},
{
"docid": "85826e44f9b52f94a76f4baa3d18774e",
"text": "Constant round authenticated group key agreement via distributed computation p. 115 Efficient ID-based group key agreement with bilinear maps p. 130 New security results on encrypted key exchange p. 145 New results on the hardness of Diffie-Hellman bits p. 159 Short exponent Diffie-Hellman problems p. 173 Efficient signcryption with key privacy from gap Diffie-Hellman groups p. 187 Algebraic attacks over GF(2[superscript k]), application to HFE Challenge 2 and Sflash-v2 p. 201",
"title": ""
}
] |
[
{
"docid": "0a28c6460818e346c474933b2d37073a",
"text": "Nonnegative matrix factorization (NMF)-based models possess fine representativeness of a target matrix, which is critically important in collaborative filtering (CF)-based recommender systems. However, current NMF-based CF recommenders suffer from the problem of high computational and storage complexity, as well as slow convergence rate, which prevents them from industrial usage in context of big data. To address these issues, this paper proposes an alternating direction method (ADM)-based nonnegative latent factor (ANLF) model. The main idea is to implement the ADM-based optimization with regard to each single feature, to obtain high convergence rate as well as low complexity. Both computational and storage costs of ANLF are linear with the size of given data in the target matrix, which ensures high efficiency when dealing with extremely sparse matrices usually seen in CF problems. As demonstrated by the experiments on large, real data sets, ANLF also ensures fast convergence and high prediction accuracy, as well as the maintenance of nonnegativity constraints. Moreover, it is simple and easy to implement for real applications of learning systems.",
"title": ""
},
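A generic, minimal sketch of nonnegative latent-factor training on the observed entries of a sparse matrix; it uses simple projected (clamped) stochastic updates rather than the paper's single-feature alternating direction method, and the data are placeholders.

```python
# Nonnegative latent factor sketch: fit P, Q >= 0 to sparse observed entries.
import random

observed = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
n_rows, n_cols, k, lr, reg = 3, 3, 2, 0.05, 0.02

P = [[random.random() for _ in range(k)] for _ in range(n_rows)]
Q = [[random.random() for _ in range(k)] for _ in range(n_cols)]

for epoch in range(200):
    for i, j, v in observed:
        pred = sum(P[i][f] * Q[j][f] for f in range(k))
        err = v - pred
        for f in range(k):
            p, q = P[i][f], Q[j][f]
            # Clamping to zero keeps the factors nonnegative after each step.
            P[i][f] = max(0.0, p + lr * (err * q - reg * p))
            Q[j][f] = max(0.0, q + lr * (err * p - reg * q))

# Reconstruction of the first observed entry should approach 5.0.
print(round(sum(P[0][f] * Q[0][f] for f in range(k)), 2))
```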
{
"docid": "f60186d137156ba97a6a04c1b960d1a0",
"text": "One of the core performing courses in institutions for pre-school teacher training is simultaneous singing and piano playing. To ensure sufficient training hours, it is important to improve teaching methods. As a way to improve the teaching of simultaneous signing and piano playing in a large class, we have incorporated blended learning, in which students are required (1) to submit videos of their performance, and (2) to view and study e-learning materials. We have analyzed how each of these requirements improved students !Gperformance skills in singing and piano playing, and found that they substantially reduce the time required for individual lessons.",
"title": ""
},
{
"docid": "a3774a953758e650077ac2a33613ff58",
"text": "We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes multiple initial alpha mattes of the previous methods and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs and reconstructed alpha mattes. Among the various existing methods, we focus on using two simple methods as initial alpha mattes: the closed-form matting and KNN matting. They are complementary to each other in terms of local and nonlocal principles. A major benefit of our method is that it can “recognize” different local image structures and then combine the results of local (closed-form matting) and nonlocal (KNN matting) mattings effectively to achieve higher quality alpha mattes than both of the inputs. Furthermore, we verify extendability of the proposed network to different combinations of initial alpha mattes from more advanced techniques such as KL divergence matting and information-flow matting. On the top of deep CNN matting, we build an RGB guided JPEG artifacts removal network to handle JPEG block artifacts in alpha matting. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. We perform deeper experiments including studies to evaluate the importance of balancing training data and to measure the effects of initial alpha mattes and also consider results from variant versions of the proposed network to analyze our proposed DCNN matting. In addition, our method achieved high ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors. Also, our RGB guided JPEG artifacts removal network restores the damaged alpha mattes from compressed images in JPEG format.",
"title": ""
},
{
"docid": "43f341cf9017305d6b94a11b8b52ec28",
"text": "Tagless interpreters for well-typed terms in some object language are a standard example of the power and benefit of precise indexing in types, whether with dependent types, or generalized algebraic datatypes. The key is to reflect object language types as indices (however they may be constituted) for the term datatype in the host language, so that host type coincidence ensures object type coincidence. Whilst this technique is widespread for simply typed object languages, dependent types have proved a tougher nut with nontrivial computation in type equality. In their type-safe representations, Danielsson [2006] and Chapman [2009] succeed in capturing the equality rules, but at the cost of representing equality derivations explicitly within terms. This article constructs a type-safe representation for a dependently typed object language, dubbed KIPLING, whose computational type equality just appropriates that of its host, Agda. The KIPLING interpreter example is not merely de rigeur - it is key to the construction. At the heart of the technique is that key component of generic programming, the universe.",
"title": ""
},
{
"docid": "76656cc995bb0a3b6644b1c5eeab2cff",
"text": "Article history: Available online 27 April 2013",
"title": ""
},
{
"docid": "a979ef975801baf7c5eaf440fb012fcf",
"text": "Shape representation is a fundamental problem in computer vision. Current approaches to shape representation mainly focus on designing low-level shape descriptors which are robust to rotation, scaling and deformation of shapes. In this paper, we focus on mid-level modeling of shape representation. We develop a new shape representation called Bag of Contour Fragments (BCF) inspired by classical Bag of Words (BoW) model. In BCF, a shape is decomposed into contour fragments each of which is then individually described using a shape descriptor, e.g., the Shape Context descriptor, and encoded into a shape code. Finally, a compact shape representation is built by pooling shape codes in the shape. Shape classification with BCF only requires an efficient linear SVM classifier. In our experiments, we fully study the characteristics of BCF, show that BCF achieves the state-of-the-art performance on several well-known shape benchmarks, and can be applied to real image classification problem.",
"title": ""
},
{
"docid": "eb344bf180467ccbd27d0aff2c57be73",
"text": "Most IP-geolocation mapping schemes [14], [16], [17], [18] take delay-measurement approach, based on the assumption of a strong correlation between networking delay and geographical distance between the targeted client and the landmarks. In this paper, however, we investigate a large region of moderately connected Internet and find the delay-distance correlation is weak. But we discover a more probable rule - with high probability the shortest delay comes from the closest distance. Based on this closest-shortest rule, we develop a simple and novel IP-geolocation mapping scheme for moderately connected Internet regions, called GeoGet. In GeoGet, we take a large number of webservers as passive landmarks and map a targeted client to the geolocation of the landmark that has the shortest delay. We further use JavaScript at targeted clients to generate HTTP/Get probing for delay measurement. To control the measurement cost, we adopt a multistep probing method to refine the geolocation of a targeted client, finally to city level. The evaluation results show that when probing about 100 landmarks, GeoGet correctly maps 35.4 percent clients to city level, which outperforms current schemes such as GeoLim [16] and GeoPing [14] by 270 and 239 percent, respectively, and the median error distance in GeoGet is around 120 km, outperforming GeoLim and GeoPing by 37 and 70 percent, respectively.",
"title": ""
},
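A tiny sketch of the closest-shortest rule described above: the target is mapped to the geolocation of the landmark with the smallest measured delay. Landmark names, coordinates, and delays are invented for illustration.

```python
# Map a target host to the geolocation of the landmark with the shortest delay.
landmarks = {
    "landmark-a": {"geo": (39.90, 116.40), "delays_ms": [42.1, 40.8, 41.5]},
    "landmark-b": {"geo": (31.23, 121.47), "delays_ms": [18.3, 17.9, 19.0]},
    "landmark-c": {"geo": (23.13, 113.26), "delays_ms": [55.7, 54.2, 56.1]},
}

def estimate_location(landmarks):
    # Use the minimum observed delay per landmark to damp queuing noise.
    best = min(landmarks.values(), key=lambda lm: min(lm["delays_ms"]))
    return best["geo"]

print(estimate_location(landmarks))   # -> (31.23, 121.47)
```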
{
"docid": "f0cd43ff855d6b10623504bf24a40fdc",
"text": "Neural network-based encoder-decoder models are among recent attractive methodologies for tackling natural language generation tasks. This paper investigates the usefulness of structural syntactic and semantic information additionally incorporated in a baseline neural attention-based model. We encode results obtained from an abstract meaning representation (AMR) parser using a modified version of Tree-LSTM. Our proposed attention-based AMR encoder-decoder model improves headline generation benchmarks compared with the baseline neural attention-based model.",
"title": ""
},
{
"docid": "b4103e5ddc58672334b66cc504dab5a6",
"text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",
"title": ""
},
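A sketch of the two-signal ranking idea from the abstract above: combine natural-language similarity with execution-information similarity and suggest the closest existing reports. TF-IDF cosine and Jaccard overlap are stand-ins for the paper's actual measures; the reports and the 0.6/0.4 weights are invented.

```python
# Rank existing bug reports against a new one using text + execution-trace similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing = [
    {"text": "crash when opening large project", "trace": {"open", "parse", "alloc"}},
    {"text": "toolbar icons render blurry", "trace": {"paint", "scale"}},
]
new = {"text": "application crashes opening a big project", "trace": {"open", "alloc"}}

texts = [r["text"] for r in existing] + [new["text"]]
tfidf = TfidfVectorizer().fit_transform(texts)
text_sims = cosine_similarity(tfidf[len(existing)], tfidf[:len(existing)]).ravel()

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

scores = [0.6 * text_sims[i] + 0.4 * jaccard(new["trace"], r["trace"])
          for i, r in enumerate(existing)]
best = max(range(len(existing)), key=lambda i: scores[i])
print("most similar existing report:", existing[best]["text"])
```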
{
"docid": "4a80d4ecb00fd27b29f342794213fc41",
"text": "Rapid and accurate analysis of platelet count plays an important role in evaluating hemorrhagic status. Therefore, we evaluated platelet counting performance of a hematology analyzer, Celltac F (MEK-8222, Nihon Kohden Corporation, Tokyo, Japan), that features easy use with low reagent consumption and high throughput while occupying minimal space in the clinical laboratory. All blood samples were anticoagulated with dipotassium ethylenediaminetetraacetic acid (EDTA-2K). The samples were stored at room temperature (18(;)C-22(;)C) and tested within 4 hours of phlebotomy. We evaluated the counting ability of the Celltac F hematology analyzer by comparing it with the platelet counts obtained by the flow cytometry method that ISLH and ICSH recommended, and also the manual visual method by Unopette (Becton Dickinson Vacutainer Systems). The ICSH/ISLH reference method is based on the fact that platelets can be stained with monoclonal antibodies to CD41 and/or CD61. The dilution ratio was optimized after the precision, coincidence events, and debris counts were confirmed by the reference method. Good correlation of platelet count between the Celltac F and the ICSH/ISLH reference method (r = 0.99, and the manual visual method (r= 0.93) were obtained. The regressions were y = 0.90 x+9.0 and y=1.11x+8.4, respectively. We conclude that the Celltac F hematology analyzer for platelet counting was well suited to the ICSH/ISLH reference method for rapidness and reliability.",
"title": ""
},
{
"docid": "b01c1a2eb508ca1f4b2de3978b2fd821",
"text": "The chapter includes a description via examples of the: objectives of integrating programming and robotics in elementary school; the pedagogical infrastructure, including a description of constructionism and computational thinking; the hardware-software support of the projects with Scratch and WeDo; and the academic support to teachers and students with LearnScratch.org. Programming and Robotics are areas of knowledge that have been historically the domain of courses in higher education and more recently in secondary education and professional studies. Today, as a result of technological advances, we have access to graphic platforms of programming, specially designed for younger students, as well as construction kits with simple sensors and actuators that can be programmed from a computer.",
"title": ""
},
{
"docid": "cebdedb344f2ba7efb95c2933470e738",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "07b71cad278fc012e1f815c2bf849418",
"text": "Electric vehicle-sharing systems have been introduced to a number of cities as a means of increasing mobility, reducing congestion, and pollution. Electric vehicle-sharing systems can offer one or two-way services. One-way systems provide more flexibility to users since they can be dropped-off at any station. However, their modeling involves a number of complexities arising from the need to relocate vehicles accumulated at certain stations. The planning of one-way electric vehicle-sharing systems involves a host of strongly interacting decisions regarding the number, size and location of stations, as well as the fleet size. In this paper we develop and solve a multi-objective MILP model for planning one-way vehicle-sharing systems taking into account vehicle relocation and electric vehicle charging requirements. For real world problems the size of the problem becomes intractable due to the extremely large number of relocation variables. In order to cope with this problem we introduce an aggregate model using the concept of the virtual hub. This transformation allows the solution of the problem with a branch-and-bound approach. The proposed approach generates the efficient frontier and allows decision makers to examine the trade-off between operator’s and users’ benefits. The capabilities of the proposed approach are demonstrated on a large scale real world problem with available data from Nice, France. Extensive sensitivity analysis was performed by varying demand, station accessibility distance and subsidy levels. The results provide useful insights regarding the efficient planning of one-way electric vehicle-sharing systems and allow decision makers to quantify the trade-off between operator’s and users’ benefits.",
"title": ""
},
{
"docid": "cc570f3d281947d417cd8476af3cced9",
"text": "This paper deals with the problem of fine-grained image classification and introduces the notion of hierarchical metric learning for the same. It is indeed challenging to categorize fine-grained image classes merely in terms of a single level classifier given the subtle inter-class visual differences. In order to tackle this problem, we propose a two stage framework where i) the image categories are represented hierarchically in terms of a binary tree structure where different subset of classes are present in the non-leaf nodes of the tree. This is accomplished in an automatic fashion considering the available training data in the visual domain, and ii) a (non-leaf) node specific metric learning is further deployed for the categories constituting a given node, thus enforcing better separation between both of its children. Subsequently, we construct (non-leaf) node specific binary classifiers in the learned metric spaces on which testing is henceforth carried out by following the outcomes of the classifiers sequence from root to leaf nodes of the tree. By separately focusing on the semantically similar classes at different levels of the hierarchy, it is expected that the classifiers in the learned metric spaces possess better discriminative capabilities than considering all the classes at a single go. Experimental results obtained on two challenging datasets (Oxford Flowers and Leeds Butterfly) establish the superiority of the proposed framework in comparison to the standard single metric learning based methods convincingly.",
"title": ""
},
{
"docid": "c61e5bae4dbccf0381269980a22f726a",
"text": "—Web mining is the application of the data mining which is useful to extract the knowledge. Web mining has been explored to different techniques have been proposed for the variety of the application. Most research on Web mining has been from a 'data-centric' or information based point of view. Web usage mining, Web structure mining and Web content mining are the types of Web mining. Web usage mining is used to mining the data from the web server log files. Web Personalization is one of the areas of the Web usage mining that can be defined as delivery of content tailored to a particular user or as personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in your content delivery framework to manipulate what information you present to your users and how you present it. In this paper, we have focused on various Web personalization categories and their research issues.",
"title": ""
},
{
"docid": "56ced0e34c82f085eeba595753d423d1",
"text": "The correctness of software is affected by its constant changes. For that reason, developers use change-impact analysis to identify early the potential consequences of changing their software. Dynamic impact analysis is a practical technique that identifies potential impacts of changes for representative executions. However, it is unknown how reliable its results are because their accuracy has not been studied. This paper presents the first comprehensive study of the predictive accuracy of dynamic impact analysis in two complementary ways. First, we use massive numbers of random changes across numerous Java applications to cover all possible change locations. Then, we study more than 100 changes from software repositories, which are representative of developer practices. Our experimental approach uses sensitivity analysis and execution differencing to systematically measure the precision and recall of dynamic impact analysis with respect to the actual impacts observed for these changes. Our results for both types of changes show that the most cost-effective dynamic impact analysis known is surprisingly inaccurate with an average precision of 38-50% and average recall of 50-56% in most cases. This comprehensive study offers insights on the effectiveness of existing dynamic impact analyses and motivates the future development of more accurate impact analyses.",
"title": ""
},
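For reference, the precision and recall figures quoted above compare the impact set predicted by dynamic impact analysis with the actually impacted entities observed via execution differencing; a small helper makes the computation concrete (the method sets are examples, not data from the study).

```python
# Precision/recall of a predicted impact set against the actual impacted set.
def precision_recall(predicted, actual):
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 1.0
    recall = true_positives / len(actual) if actual else 1.0
    return precision, recall

predicted_impact = {"Parser.parse", "Lexer.next", "Cache.put", "Log.write"}
actual_impact = {"Parser.parse", "Cache.put", "Cache.get"}
p, r = precision_recall(predicted_impact, actual_impact)
print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.50 recall=0.67
```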
{
"docid": "e6b4097ead39f9b5144e2bd8551762ed",
"text": "Thanks to advances in medical imaging technologies and numerical methods, patient-specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. However, extracting smooth multi-material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi-region mesh extraction scheme. The surface reconstruction algorithm is based on multi-level partition of unity implicit surfaces, which we extended to the multi-material case. The mesh generation algorithm consists in a novel multi-domain version of the marching tetrahedra. It generates multi-region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi-region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high-quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations.",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "a3d7a6d788d6b520a4aa79343bd1b27e",
"text": "This paper explores the possibilities of analogical reasoning with vector space models. Given two pairs of words with the same relation (e.g. man:woman :: king:queen), it was proposed that the offset between one pair of the corresponding word vectors can be used to identify the unknown member of the other pair ( −−→ king − −−→ man + −−−−−→ woman = ?−−−→ queen). We argue against such “linguistic regularities” as a model for linguistic relations in vector space models and as a benchmark, and we show that the vector offset (as well as two other, better-performing methods) suffers from dependence on vector similarity.",
"title": ""
},
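A toy, worked version of the vector-offset test discussed above: form king - man + woman and return the most cosine-similar vocabulary word, excluding the three query words. The 3-dimensional vectors are invented so the arithmetic is easy to follow; real embeddings behave far less cleanly, which is part of the paper's point.

```python
# Worked toy example of the vector-offset analogy test.
import numpy as np

emb = {
    "king":  np.array([0.80, 0.60, 0.10]),
    "man":   np.array([0.70, 0.10, 0.05]),
    "woman": np.array([0.68, 0.12, 0.80]),
    "queen": np.array([0.78, 0.62, 0.85]),
    "apple": np.array([0.05, 0.90, 0.20]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = emb["king"] - emb["man"] + emb["woman"]
candidates = {w: cosine(target, v) for w, v in emb.items()
              if w not in ("king", "man", "woman")}
print(max(candidates, key=candidates.get))   # "queen" for these toy vectors
```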
{
"docid": "44c9526319039305edf89ce58deb6398",
"text": "Networks of constraints fundamental properties and applications to picture processing Sketchpad: a man-machine graphical communication system Using auxiliary variables and implied constraints to model non-binary problems Solving constraint satisfaction problems using neural-networks C. Search Backtracking algorithms for constraint satisfaction problems; a survey",
"title": ""
}
] |
scidocsrr
|
0a1fb685da1449282c476c6d24625ad7
|
Modern Computer Vision Techniques for X-Ray Testing in Baggage Inspection
|
[
{
"docid": "6a5b587073c46cc584fc01c4f3519fab",
"text": "Baggage inspection using X-ray screening is a priority task that reduces the risk of crime and terrorist attacks. Manual detection of threat items is tedious because very few bags actually contain threat items and the process requires a high degree of concentration. An automated solution would be a welcome development in this field. We propose a methodology for automatic detection of threat objects using single X-ray images. Our approach is an adaptation of a methodology originally created for recognizing objects in photographs based on implicit shape models. Our detection method uses a visual vocabulary and an occurrence structure generated from a training dataset that contains representative X-ray images of the threat object to be detected. Our method can be applied to single views of grayscale X-ray images obtained using a single energy acquisition system. We tested the effectiveness of our method for the detection of three different threat objects: 1) razor blades; 2) shuriken (ninja stars); and 3) handguns. The testing dataset for each threat object consisted of 200 X-ray images of bags. The true positive and false positive rates (TPR and FPR) are: (0.99 and 0.02) for razor blades, (0.97 and 0.06) for shuriken, and (0.89 and 0.18) for handguns. If other representative training datasets were utilized, we believe that our methodology could aid in the detection of other kinds of threat objects.",
"title": ""
},
{
"docid": "2a56702663e6e52a40052a5f9b79a243",
"text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"title": ""
}
] |
[
{
"docid": "18e1a3bbb95237862f9d48cf18ce24f1",
"text": "To improve real-time control performance and reduce possible negative impacts of photovoltaic (PV) systems, an accurate forecasting of PV output is required, which is an important function in the operation of an energy management system (EMS) for distributed energy resources. In this paper, a weather-based hybrid method for 1-day ahead hourly forecasting of PV power output is presented. The proposed approach comprises classification, training, and forecasting stages. In the classification stage, the self-organizing map (SOM) and learning vector quantization (LVQ) networks are used to classify the collected historical data of PV power output. The training stage employs the support vector regression (SVR) to train the input/output data sets for temperature, probability of precipitation, and solar irradiance of defined similar hours. In the forecasting stage, the fuzzy inference method is used to select an adequate trained model for accurate forecast, according to the weather information collected from Taiwan Central Weather Bureau (TCWB). The proposed approach is applied to a practical PV power generation system. Numerical results show that the proposed approach achieves better prediction accuracy than the simple SVR and traditional ANN methods.",
"title": ""
},
{
"docid": "f2274a04e0a54fb5a46e2be99863d9ac",
"text": "I find that dialysis providers in the United States exercise market power by reducing the clinical quality, or dose, of dialysis treatment. This market power stems from two sources. The first is a spatial dimension—patients face high travel costs and traveling farther for quality is undesirable. The second source is congestion—technological constraints may require dialysis capacity to be rationed among patients. Both of these sources of market power should be considered when developing policies aimed at improving quality or access in this industry. To this end, I develop and estimate an entry game with quality competition where providers choose both capacity and quality. Increasing the Medicare reimbursement rate for dialysis or subsidizing entry result in increased entry and improved quality for patients. However, these policies are extremely costly because providers are able to capture 84 to 97 percent of the additional surplus, leaving very little pass-through to consumers. Policies targeting the sources of market power provide a cost effective way of improving quality by enhancing competition and forcing providers to give up producer surplus. For example, I find that a program subsidizing patient travel costs $373 million, increases consumer surplus by $440 million, and reduces the mortality rate by 3 percent. ∗I thank my advisers Allan Collard-Wexler, Pat Bayer, Ryan McDevitt, James Roberts, and Chris Timmins for their extensive comments, guidance, and support. I am also grateful to Peter Arcidiacono, Federico Bugni, David Ridley, Adam Rosen, John Singleton, Frank Sloan, Daniel Xu and seminar participants at Duke, the International Industrial Organization Conference, and the Applied Micro Workshop at the Federal Reserve Board. †paul.eliason@duke.edu",
"title": ""
},
{
"docid": "9a1ca37307e4470a121718dd8e579c96",
"text": "We present a language-independent verification framework that can be instantiated with an operational semantics to automatically generate a program verifier. The framework treats both the operational semantics and the program correctness specifications as reachability rules between matching logic patterns, and uses the sound and relatively complete reachability logic proof system to prove the specifications using the semantics. We instantiate the framework with the semantics of one academic language, KernelC, as well as with three recent semantics of real-world languages, C, Java, and JavaScript, developed independently of our verification infrastructure. We evaluate our approach empirically and show that the generated program verifiers can check automatically the full functional correctness of challenging heap-manipulating programs implementing operations on list and tree data structures, like AVL trees. This is the first approach that can turn the operational semantics of real-world languages into correct-by-construction automatic verifiers.",
"title": ""
},
{
"docid": "4a4a0dde01536789bd53ec180a136877",
"text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.",
"title": ""
},
{
"docid": "782341e7a40a95da2a430faae977dea0",
"text": "Current Web services standards lack the means for expressing a service's nonfunctional attributes - namely, its quality of service. QoS can be objective (encompassing reliability, availability, and request-to-response time) or subjective (focusing on user experience). QoS attributes are key to dynamically selecting the services that best meet user needs. This article addresses dynamic service selection via an agent framework coupled with a QoS ontology. With this approach, participants can collaborate to determine each other's service quality and trustworthiness.",
"title": ""
},
{
"docid": "5b6a73103e7310de86c37185c729b8d9",
"text": "Motion segmentation is currently an active area of research in computer Vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as ”Which objects are moving?”, ”What is background?”, and ”How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as welldefined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.",
"title": ""
},
{
"docid": "50e7e02f9a4b8b65cf2bce212314e77c",
"text": "Over the past few years, massive amounts of world knowledge have been accumulated in publicly available knowledge bases, such as Freebase, NELL, and YAGO. Yet despite their seemingly huge size, these knowledge bases are greatly incomplete. For example, over 70% of people included in Freebase have no known place of birth, and 99% have no known ethnicity. In this paper, we propose a way to leverage existing Web-search-based question-answering technology to fill in the gaps in knowledge bases in a targeted way. In particular, for each entity attribute, we learn the best set of queries to ask, such that the answer snippets returned by the search engine are most likely to contain the correct value for that attribute. For example, if we want to find Frank Zappa's mother, we could ask the query `who is the mother of Frank Zappa'. However, this is likely to return `The Mothers of Invention', which was the name of his band. Our system learns that it should (in this case) add disambiguating terms, such as Zappa's place of birth, in order to make it more likely that the search results contain snippets mentioning his mother. Our system also learns how many different queries to ask for each attribute, since in some cases, asking too many can hurt accuracy (by introducing false positives). We discuss how to aggregate candidate answers across multiple queries, ultimately returning probabilistic predictions for possible values for each attribute. Finally, we evaluate our system and show that it is able to extract a large number of facts with high confidence.",
"title": ""
},
{
"docid": "bd100b77d129163277b9ea6225fd3af3",
"text": "Spatial interactions (or flows), such as population migration and disease spread, naturally form a weighted location-to-location network (graph). Such geographically embedded networks (graphs) are usually very large. For example, the county-to-county migration data in the U.S. has thousands of counties and about a million migration paths. Moreover, many variables are associated with each flow, such as the number of migrants for different age groups, income levels, and occupations. It is a challenging task to visualize such data and discover network structures, multivariate relations, and their geographic patterns simultaneously. This paper addresses these challenges by developing an integrated interactive visualization framework that consists three coupled components: (1) a spatially constrained graph partitioning method that can construct a hierarchy of geographical regions (communities), where there are more flows or connections within regions than across regions; (2) a multivariate clustering and visualization method to detect and present multivariate patterns in the aggregated region-to-region flows; and (3) a highly interactive flow mapping component to map both flow and multivariate patterns in the geographic space, at different hierarchical levels. The proposed approach can process relatively large data sets and effectively discover and visualize major flow structures and multivariate relations at the same time. User interactions are supported to facilitate the understanding of both an overview and detailed patterns.",
"title": ""
},
{
"docid": "1cee79d4a07b4ef97098be940484afe8",
"text": "We show that existing methods for training preposition error correction systems, whether using well-edited text or error-annotated corpora, do not generalize across very different test sets. We present a new, large errorannotated corpus and use it to train systems that generalize across three different test sets, each from a different domain and with different error characteristics. This new corpus is automatically extracted from Wikipedia revisions and contains over one million instances of preposition corrections.",
"title": ""
},
{
"docid": "980ad058a2856048765f497683557386",
"text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.",
"title": ""
},
{
"docid": "8bc4f7c626ed5884c08d8b41416e9f4d",
"text": "Rescue robotics has been suggested by a recent DARPA/NSF study as an application domain for the research in human-robot integration (HRI). This paper provides a short tutorial on how robots are currently used in urban search and rescue (USAR) and discusses the HRI issues encountered over the past eight years. A domain theory of the search activity is formulated. The domain theory consists of two parts: 1) a workflow model identifying the major tasks, actions, and roles in robot-assisted search (e.g., a workflow model) and 2) a general information flow model of how data from the robot is fused by various team members into information and knowledge. The information flow model also captures the types of situation awareness needed by each agent in the rescue robot system. The article presents a synopsis of the major HRI issues in reducing the number of humans it takes to control a robot, maintaining performance with geographically distributed teams with intermittent communications, and encouraging acceptance within the existing social structure.",
"title": ""
},
{
"docid": "d06f27b688f430acf5652fd4c67905b1",
"text": "A comprehensive in vitro study involving antiglycation, antioxidant and anti-diabetic assays was carried out in mature fruits of strawberry. The effect of aqueous extract of mature strawberry fruits on glycation of guanosine with glucose and fructose with or without oxidizing entities like reactive oxygen species was analyzed. Spectral studies showed that glycation and/or fructation of guanosine was significantly inhibited by aqueous extract of strawberry. The UV absorbance of the glycation reactions was found to be maximum at 24 hrs. and decreased consecutively for 48, 72 and 96 hours. Inhibition of oxidative damage due to reactive oxygen species was also observed in presence of the plant extract. To our knowledge, antiglycation activity of strawberry fruit with reference to guanosine is being demonstrated for the first time. To determine the antioxidant activity of the plant extract, in vitro antioxidant enzymes assays (catalase, peroxidase, polyphenol oxidase and ascorbic acid oxidase) and antioxidant assays (DPPH, superoxide anion scavenging activity and xanthine oxidase) were performed. Maximum inhibition activity of 79.36%, 65.62% and 62.78% was observed for DPPH, superoxide anion scavenging and xanthine oxidase, respectively. In antidiabetic assays, IC50 value for alpha – amylase and alpha – glucosidase activity of fruit extract of strawberry was found to be 86.47 ± 1.12μg/ml and 76.83 ± 0.93 μg/ml, respectively. Thus, the aqueous extract of strawberry showed antiglycation, antioxidant and antidiabetic properties indicating that strawberry fruits, as a dietary supplement, may be utilized towards management of diabetes.",
"title": ""
},
{
"docid": "6b97884f9bc253e1291d816d38608093",
"text": "The World Health Organization (WHO) is currently updating the tenth version of their diagnostic tool, the International Classification of Diseases (ICD, WHO, 1992). Changes have been proposed for the diagnosis of Transsexualism (ICD-10) with regard to terminology, placement and content. The aim of this study was to gather the opinions of transgender individuals (and their relatives/partners) and clinicians in the Netherlands, Flanders (Belgium) and the United Kingdom regarding the proposed changes and the clinical applicability and utility of the ICD-11 criteria of 'Gender Incongruence of Adolescence and Adulthood' (GIAA). A total of 628 participants were included in the study: 284 from the Netherlands (45.2%), 8 from Flanders (Belgium) (1.3%), and 336 (53.5%) from the UK. Most participants were transgender people (or their partners/relatives) (n = 522), 89 participants were healthcare providers (HCPs) and 17 were both healthcare providers and (partners/relatives of) transgender people. Participants completed an online survey developed for this study. Most participants were in favor of the proposed diagnostic term of 'Gender Incongruence' and thought that this was an improvement on the ICD-10 diagnostic term of 'Transsexualism'. Placement in a separate chapter dealing with Sexual- and Gender-related Health or as a Z-code was preferred by many and only a small number of participants stated that this diagnosis should be excluded from the ICD-11. In the UK, most transgender participants thought there should be a diagnosis related to being trans. However, if it were to be removed from the chapter on \"psychiatric disorders\", many transgender respondents indicated that they would prefer it to be removed from the ICD in its entirety. There were no large differences between the responses of the transgender participants (or their partners and relatives) and HCPs. HCPs were generally positive about the GIAA diagnosis; most thought the diagnosis was clearly defined and easy to use in their practice or work. The duration of gender incongruence (several months) was seen by many as too short and required a clearer definition. If the new diagnostic term of GIAA is retained, it should not be stigmatizing to individuals. Moving this diagnosis away from the mental and behavioral chapter was generally supported. Access to healthcare was one area where retaining a diagnosis seemed to be of benefit.",
"title": ""
},
{
"docid": "a79d4b0a803564f417236f2450658fe0",
"text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "8053ea05b8f3de5fbd77118aa4765941",
"text": "A dual-polarized wideband aperture antenna operating in millimeter wave band (about 30GHz) for MIMO applications in 5G mobile communication systems is presented. Using the tapered aperture hybrid structure and the backed-cavity design on PCB technology, the antenna achieves dual-polarized and unidirectional broadside radiation. The proposed antenna also possesses a high port-to-port isolation of better than 26dB and a broad impedance bandwidth of 25%. This work further extends to a 4-element array with an isolation better than 18 dB under array arrangement. The antenna design were verified by simulation and measurement.",
"title": ""
},
{
"docid": "ca356c9bec43950b14014ff3cbb6909b",
"text": "Microbionic robots are used to access small tedious spaces with high maneuverability. These robots are employed in surveying and inspection of pipelines as well as tracking and examination of animal or human body during surgical activities. Relatively bigger and powerful robots are used for searching people trapped under wreckages and dirt after disasters. In order to achieve high maneuverability and to tackle various critical scenarios, a novel design of multi-segment Vermicular Robot is proposed with an adaptable actuation mechanism. Owing to the 3 Degrees of freedom (Dof) actuation mechanism, it will not only have faster forward motion but its full hemispherical turning capability would allow the robot to sharply steer as well as lift with smaller radii. The Robot will have the capability to simultaneously follow peristaltic motion (elongation/retraction) as well as looper motion (lifting body up/down). The paper presents locomotion patterns of the Vermicular Robot having Canfield actuation mechanism and highlights various scenarios in order to avoid obstacles en-route.",
"title": ""
},
{
"docid": "f1c57270a908155954049ff06d33918b",
"text": "Volume 40 Number 11 November 2014 484 Health care organizations today are finding that simply providing a “good” health care experience is insufficient to meet patient expectations. Organizations must train staff to provide excellent customer service to all patients. Many patients have become savvy health care “consumers” and consider customer service when they evaluate the quality of care they receive. The challenge for health care organizations is that patients want and expect not only outstanding clinical interventions but also excellent customer service—on every single visit.1(p. 25) A growing body of evidence suggests that patient (including family) feedback can provide compelling opportunities for developing risk management and quality improvement strategies, as well as improving customer satisfaction.2–5 Research links patient dissatisfaction with malpractice claims and unnecessary expenses.6–10 Cold food, rude behavior, long waiting periods, and quality of care concerns should be taken seriously by hospital leadership not only because such attention is addressed in Joint Commission accreditation standards or required by the Centers for Medicare & Medicaid Services (CMS) but because it is the right thing to do. The Joint Commission standards speak to the collection of, response to, and documentation of complaints from hospital patients and their families,*11 and CMS deems a time frame of 7 days appropriate for resolution for most complaints, with 21 days for complex complaints.†12 In addition, in July 2008 Joint Commission Sentinel Event Alert 40 stated that disruptive and intimidating physician behavior toward patients and colleagues may lead to medical errors, poor patient satisfaction, preventable adverse outcomes Patient-Centered Care",
"title": ""
},
{
"docid": "17cfb720c78e6d028f7578f2c5bdcf13",
"text": "Driver's drowsiness and fatigue have been major causes of the serious traffic accidents, which make this an area of great socioeconomic concern. This paper describes the design of ECG (Electrocardiogram) sensor with conductive fabric electrodes and PPG (Photoplethysmogram) sensor to obtain physiological signals for car driver's health condition monitoring. ECG and PPG signals are transmitted to base station connected to the server PC via personal area network for practical test. Intelligent health condition monitoring system is designed at the server to analyze the PPG and ECG signals. Our purpose for intelligent health condition monitoring system is managed to process HRV signals analysis derived from the physiological signals in time and frequency domain and to evaluate the driver's drowsiness status.",
"title": ""
},
{
"docid": "f1e9c9106dd3cdd7b568d5513b39ac7a",
"text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn- on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.",
"title": ""
},
{
"docid": "e053da2be2dd6917d8887428a3302f0d",
"text": "Inactivation of von Hippel–Lindau tumor-suppressor protein (pVHL) is associated with von Hippel–Lindau disease, an inherited cancer syndrome, as well as the majority of patients with sporadic clear cell renal cell carcinoma (RCC). Although the involvement of pVHL in oxygen sensing through targeting hypoxia-inducible factor-α subunits to ubiquitin-dependent proteolysis has been well documented, less is known about pVHL regulation under both normoxic and hypoxic conditions. We found that pVHL levels decreased in hypoxia and that hypoxia-induced cell cycle arrest is associated with pVHL expression in RCC cells. pVHL levels fluctuate during the cell cycle, paralleling cyclin B1 levels, with decreased levels in mitosis and G1. pVHL contains consensus destruction (D) box sequences, and pVHL associates with Cdh1, an activator of the anaphase-promoting complex/cyclosome (APC/C) E3 ubiquitin ligase. We show that pVHL has a decreased half-life in G1, Cdh1 downregulation results in increased pVHL expression, whereas Cdh1 overexpression results in decreased pVHL expression. Taken together, these results suggest that pVHL is a novel substrate of APC/CCdh1. D box-independent pVHL degradation was also detected, indicating that other ubiquitin ligases are also activated for pVHL degradation.",
"title": ""
}
] |
scidocsrr
|
972c50e9afe9600f054121c345a1eaae
|
Decision-Based Transcription of Jazz Guitar Solos Using a Harmonic Bident Analysis Filter Bank and Spectral Distribution Weighting
|
[
{
"docid": "4f43c8ba81a8b828f225923690e9f7dd",
"text": "Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade, melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ?melody? from both musical and signal processing perspectives and provide a case study that interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation, and applications that build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.",
"title": ""
},
{
"docid": "60d8839833d10b905729e3d672cafdd6",
"text": "In order to account for the phenomenon of virtual pitch, various theories assume implicitly or explicitly that each spectral component introduces a series of subharmonics. The spectral-compression method for pitch determination can be viewed as a direct implementation of this principle. The widespread application of this principle in pitch determination is, however, impeded by numerical problems with respect to accuracy and computational efficiency. A modified algorithm is described that solves these problems. Its performance is tested for normal speech and \"telephone\" speech, i.e., speech high-pass filtered at 300 Hz. The algorithm out-performs the harmonic-sieve method for pitch determination, while its computational requirements are about the same. The algorithm is described in terms of nonlinear system theory, i.c., subharmonic summation. It is argued that the favorable performance of the subharmonic-summation algorithm stems from its corresponding more closely with current pitch-perception theories than does the harmonic sieve.",
"title": ""
}
] |
[
{
"docid": "6fb416991c80cb94ad09bc1bb09f81c7",
"text": "Children with Autism Spectrum Disorder often require therapeutic interventions to support engagement in effective social interactions. In this paper, we present the results of a study conducted in three public schools that use an educational and behavioral intervention for the instruction of social skills in changing situational contexts. The results of this study led to the concept of interaction immediacy to help children maintain appropriate spatial boundaries, reply to conversation initiators, disengage appropriately at the end of an interaction, and identify potential communication partners. We describe design principles for Ubicomp technologies to support interaction immediacy and present an example design. The contribution of this work is twofold. First, we present an understanding of social skills in mobile and dynamic contexts. Second, we introduce the concept of interaction immediacy and show its effectiveness as a guiding principle for the design of Ubicomp applications.",
"title": ""
},
{
"docid": "020c31f1466a5cf16188993078137a93",
"text": "This paper is more about the questions for a theory of language evolution than about the answers. I’d like to ask what there is for a theory of the evolution of language to explain, and I want to show how this depends on what you think language is. So, what is language? Everybody recognizes that language is partly culturally dependent: there is a huge variety of disparate languages in the world, passed down through cultural transmission. If that’s all there is to language, a theory of the evolution of language has nothing at all to explain. We need only explain the cultural evolution of languages: English, Dutch, Mandarin, Hausa, etc. are products of cultural history. However, most readers of the present volume probably subscribe to the contemporary scientific view of language, which goes beneath the cultural differences among languages. It focuses on individual language users and asks:",
"title": ""
},
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "3e5041c6883ce6ab59234ed2c8c995b7",
"text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.",
"title": ""
},
{
"docid": "3dd8c177ae928f7ccad2aa980bd8c747",
"text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.",
"title": ""
},
{
"docid": "c185493668b49314afea915d1a2fc839",
"text": "In recent years, Particle Swarm Optimization has evolved as an effective global optimization algorithm whose dynamics has been inspired from swarming or collaborative behavior of biological populations. In this paper, PSO has been applied to Triple Link Inverted Pendulum model to find its reduced order model by minimization of error between the step responses of higher and reduced order model. Model Order Reduction using PSO algorithm is advantageous due to ease in implementation, higher accuracy and decreased time of computation. The second and third order reduced transfer functions of Triple Link Inverted Pendulum have been computed for comparison. Keywords—Particle Swarm Optimization, Triple Link Inverted Pendulum, Model Order Reduction, Pole Placement technique.",
"title": ""
},
{
"docid": "982af44d0c5fc3d0bddd2804cee77a04",
"text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.",
"title": ""
},
{
"docid": "758def2083055b147d19b99280e5c8d2",
"text": "We present the Virtual Showcase, a new multiviewer augmented reality display device that has the same form factor as a real showcase traditionally used for museum exhibits.",
"title": ""
},
{
"docid": "a734d59544fd17d6991b71c5f4b8bdf6",
"text": "Transgenic cotton that produced one or more insecticidal proteins of Bacillus thuringiensis (Bt) was planted on over 15 million hectares in 11 countries in 2009 and has contributed to a reduction of over 140 million kilograms of insecticide active ingredient between 1996 and 2008. As a highly selective form of host plant resistance, Bt cotton effectively controls a number of key lepidopteran pests and has become a cornerstone in overall integrated pest management (IPM). Bt cotton has led to large reductions in the abundance of targeted pests and benefited non-Bt cotton adopters and even producers of other crops affected by polyphagous target pests. Reductions in insecticide use have enhanced biological control, which has contributed to significant suppression of other key and sporadic pests in cotton. Although reductions in insecticide use in some regions have elevated the importance of several pest groups, most of these emerging problems can be effectively solved through an IPM approach.",
"title": ""
},
{
"docid": "a0840cf58ca21b738543924f6ed1a2f3",
"text": "Emojis have been widely used in textual communications as a new way to convey nonverbal cues. An interesting observation is the various emoji usage patterns among different users. In this paper, we investigate the correlation between user personality traits and their emoji usage patterns, particularly on overall amounts and specific preferences. To achieve this goal, we build a large Twitter dataset which includes 352,245 users and over 1.13 billion tweets associated with calculated personality traits and emoji usage patterns. Our correlation and emoji prediction results provide insights into the power of diverse personalities that lead to varies emoji usage patterns as well as its potential in emoji recommendation",
"title": ""
},
{
"docid": "3f9eb2e91e0adc0a58f5229141f826ee",
"text": "Box-office performance of a movie is mainly determined by the amount the movie collects in the opening weekend and Pre-Release hype is an important factor as far as estimating the openings of the movie are concerned. This can be estimated through user opinions expressed online on sites such as Twitter which is an online micro-blogging site with a user base running into millions. Each user is entitled to his own opinion which he expresses through his tweets. This paper suggests a novel way to mine and analyze the opinions expressed in these tweets with respect to a movie prior to its release, estimate the hype surrounding it and also predict the box-office openings of the movie.",
"title": ""
},
{
"docid": "fa1a6afff63a91c084aa8b2197479bed",
"text": "Conventional wisdom in deep learning states that increasing depth improves expressiveness but complicates optimization. This paper suggests that, sometimes, increasing depth can speed up optimization. The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization – linear neural networks, a wellstudied model. Theoretical analysis, as well as experiments, show that here depth acts as a preconditioner which may accelerate convergence. Even on simple convex problems such as linear regression with `p loss, p > 2, gradient descent can benefit from transitioning to a non-convex overparameterized objective, more than it would from some common acceleration schemes. We also prove that it is mathematically impossible to obtain the acceleration effect of overparametrization via gradients of any regularizer.",
"title": ""
},
{
"docid": "34d7f848427052a1fc5f565a24f628ec",
"text": "This is the solutions manual (web-edition) for the book Pattern Recognition and Machine Learning (PRML; published by Springer in 2006). It contains solutions to the www exercises. This release was created September 8, 2009. Future releases with corrections to errors will be published on the PRML web-site (see below). The authors would like to express their gratitude to the various people who have provided feedback on earlier releases of this document. In particular, the \" Bishop Reading Group \" , held in the Visual Geometry Group at the University of Oxford provided valuable comments and suggestions. The authors welcome all comments, questions and suggestions about the solutions as well as reports on (potential) errors in text or formulae in this document; please send any such feedback to",
"title": ""
},
{
"docid": "e4d58b9b8775f2a30bc15fceed9cd8bf",
"text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.",
"title": ""
},
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
},
{
"docid": "3ba14079f728bcf9fbf7f1029655f5d2",
"text": "BACKGROUND\nThis study assessed the prevalence of six alcohol consumption indicators in a sample of university students. We also examined whether students' sociodemographic and educational characteristics were associated with any of the six alcohol consumption indicators; and whether associations between students' sociodemographic and educational characteristics and the six alcohol consumption indicators differed by gender.\n\n\nMETHODS\nA cross-sectional study of 3706 students enrolled at 7 universities in England, Wales and Northern Ireland. A self-administered questionnaire assessed six alcohol consumption measures: length of time of last (most recent) drinking occasion; amount consumed during last drinking occasion; frequency of alcohol consumption; heavy episodic drinking (≥ 5 drinks in a row); problem drinking; and possible alcohol dependence as measured by CAGE. The questionnaire also collected information on seven relevant student sociodemographic characteristics (age, gender, academic year of study, current living circumstances - accommodation with parents, whether student was in intimate relationship, socioeconomic status of parents - parental education, income sufficiency) and two academic achievement variables (importance of achieving good grades at university, and one's academic performance in comparison with one's peers).\n\n\nRESULTS\nThe majority of students (65% of females, 76% of males) reported heavy episodic drinking at least once within the last 2 weeks, and problem drinking was prevalent in 20% of females and 29% of males. Factors consistently positively associated with all six indicators of alcohol consumption were male gender and perceived insufficient income. Other factors such as living away from home, being in 1st or 2nd year of studies, having no intimate partner, and lower academic achievement were associated with some, but not all indicators of alcohol consumption.\n\n\nCONCLUSIONS\nThe high level of alcohol consumption calls for regular/periodic monitoring of student use of alcohol, and for urgent preventive actions and intervention programmes at the universities in the UK.",
"title": ""
},
{
"docid": "df2bc3dce076e3736a195384ae6c9902",
"text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.",
"title": ""
},
{
"docid": "39fa66b86ca91c54a2d2020f04ecc7ba",
"text": "We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"title": ""
},
{
"docid": "fce9ff5cdb69f7df9660decf14a741b4",
"text": "This paper presents the design and development of a low-loss and wide-band width multilayered Marchand balun. The balun has been implemented using printed circuit board materials with integrated multilayered organic thin films. We have designed the top layer transmission lines on twin-thickness organic thin films to achieve low loss and wide bandwidth for the balun. The experimental results demonstrate that the balun achieves less than 0.5-0.7 dB insertion loss throughout the 4-20-GHz operation bandwidth. The phase and amplitude imbalances are less than 5/spl deg/ and 0.5 dB, respectively. Time-domain measurement results demonstrate that the balun has negligible dispersion.",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
}
] |
scidocsrr
|
396e5e313f18d2f0a438337945ecdef5
|
The interaction between sleep quality and academic performance.
|
[
{
"docid": "9c80e8db09202335f427ebf02659eac3",
"text": "The present paper reviews and critiques studies assessing the relation between sleep patterns, sleep quality, and school performance of adolescents attending middle school, high school, and/or college. The majority of studies relied on self-report, yet the researchers approached the question with different designs and measures. Specifically, studies looked at (1) sleep/wake patterns and usual grades, (2) school start time and phase preference in relation to sleep habits and quality and academic performance, and (3) sleep patterns and classroom performance (e.g., examination grades). The findings strongly indicate that self-reported shortened total sleep time, erratic sleep/wake schedules, late bed and rise times, and poor sleep quality are negatively associated with academic performance for adolescents from middle school through the college years. Limitations of the current published studies are also discussed in detail in this review.",
"title": ""
}
] |
[
{
"docid": "37af58543ae2508271439427f424caf7",
"text": "Bitcoin is the first widely adopted decentralized digitale-cash system. All Bitcoin transactions that include addresses of senders and receivers are stored in the public blockchain which could cause privacy problems. The Zerocoin protocol hides the link between individual Bitcoin transactions without adding trusted third parties. However such an untraceable remittance system could cause illegal transfers such as money laundering. In this paper we address this problem and propose an auditable decentralized e-cash scheme based on the Zerocoin protocol. Our scheme allows designated auditors to extract link information from Zerocoin transactions while preventing other users including miners from obtaining it. Respecting the mind of the decentralized system, the auditor doesn't have other authorities such as stopping transfers, confiscating funds, and deactivating accounts. A technical contribution of our scheme is that a coin sender embeds audit information with a non-interactive zeroknowledge proof of knowledge (NIZKP). This zero-knowledge prevents malicious senders from embedding indiscriminate audit information, and we construct it simply using only the standard Schnorr protocol for discrete logarithm without zk-SNARKs or other recent techniques for zero-knowledge proof.",
"title": ""
},
{
"docid": "902a60b23d65c644877b350c63b86ba8",
"text": "The Internet of Things (IoT) is set to occupy a substantial component of future Internet. The IoT connects sensors and devices that record physical observations to applications and services of the Internet[1]. As a successor to technologies such as RFID and Wireless Sensor Networks (WSN), the IoT has stumbled into vertical silos of proprietary systems, providing little or no interoperability with similar systems. As the IoT represents future state of the Internet, an intelligent and scalable architecture is required to provide connectivity between these silos, enabling discovery of physical sensors and interpretation of messages between the things. This paper proposes a gateway and Semantic Web enabled IoT architecture to provide interoperability between systems, which utilizes established communication and data standards. The Semantic Gateway as Service (SGS) allows translation between messaging protocols such as XMPP, CoAP and MQTT via a multi-protocol proxy architecture. Utilization of broadly accepted specifications such as W3Cs Semantic Sensor Network (SSN) ontology for semantic annotations of sensor data provide semantic interoperability between messages and support semantic reasoning to obtain higher-level actionable knowledge from low-level sensor data.",
"title": ""
},
{
"docid": "e64608f39ab082982178ad2b3539890f",
"text": "Hoeschele, Michael David. M.S., Purdue University, May, 2006, Detecting Social Engineering. Major Professor: Marcus K. Rogers. This study consisted of creating and evaluating a proof of concept model of the Social Engineering Defense Architecture (SEDA) as theoretically proposed by Hoeschele and Rogers (2005). The SEDA is a potential solution to the problem of Social Engineering (SE) attacks perpetrated over the phone lines. The proof of concept model implemented some simple attack detection processes and the database to store all gathered information. The model was tested by generating benign telephone conversations in addition to conversations that include Social Engineering (SE) attacks. The conversations were then processed by the model to determine its accuracy to detect attacks. The model was able to detect all attacks and to store all of the correct data in the database, resulting in 100% accuracy.",
"title": ""
},
{
"docid": "b71ec61f22457a5604a1c46087685e45",
"text": "Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.",
"title": ""
},
{
"docid": "ae59ef9772ea8f8277a2d91030bd6050",
"text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.",
"title": ""
},
{
"docid": "795247ec761cd3ee514afc880e80fcca",
"text": "This paper describes a new working implementation of the Python language; built on top of the Java language and run-time environment. This is in contrast to the existing implementation of Python, which has been built on top of the C language and run-time environment. Implementing Python in Java has a number of limitations when compared to the current implementation of Python in C. These include about 1.7X slower performance, portability limited by Java VM availability, and lack of compatibility with existing C extension modules. The advantages of Java over C as an implementation language include portability of binary executables, object-orientation in the implementation language to match object-orientation in Python, true garbage collection, run-time exceptions instead of segmentation faults, and the ability to automatically generate wrapper code for arbitrary Java libraries.",
"title": ""
},
{
"docid": "73b0a5820c8268bb5911e1b44401273b",
"text": "In typical reinforcement learning (RL), the environment is assumed given and the goal of the learning is to identify an optimal policy for the agent taking actions through its interactions with the environment. In this paper, we extend this setting by considering the environment is not given, but controllable and learnable through its interaction with the agent at the same time. This extension is motivated by environment design scenarios in the realworld, including game design, shopping space design and traffic signal design. Theoretically, we find a dual Markov decision process (MDP) w.r.t. the environment to that w.r.t. the agent, and derive a policy gradient solution to optimizing the parametrized environment. Furthermore, discontinuous environments are addressed by a proposed general generative framework. Our experiments on a Maze game design task show the effectiveness of the proposed algorithms in generating diverse and challenging Mazes against various agent settings.",
"title": ""
},
{
"docid": "72782fdcc61d1059bce95fe4e7872f5b",
"text": "ÐIn object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index TermsÐMedian graph, graph distance, graph matching, genetic algorithm,",
"title": ""
},
{
"docid": "86bf67085df96877b3409a80f78c4504",
"text": "Well-known met hods for solving the shape-from-shading problem require knowledge of the reflectance map. Here we show how the shape-from-shading problem can be solved when the reflectance map is not available, but is known to have a given form with some unknown parameters. This happens, for example, when the surface is known to be Lambertian, but the direction to the light source is not known. We give an iterative algorithm which alternately estimate* the surface shape and the light source direction. Use of the unit normal in parameterizing the reflectance map, rather than the gradient or stereographic coordinates, simplifies the analysis. Our approach also leads to an iterative scheme for computing shape from shading that adjusts the current estimates of the local normals toward or away from the direction of the light source. The amount of adjustment is proportional to the current difference between the predicted and the observed brightness. We also develop generalizations to less constrained forms of reflectance maps.",
"title": ""
},
{
"docid": "5b993bd682870138afdc52004038d90e",
"text": "Video Object Segmentation, and video processing in general, has been historically dominated by methods that rely on the temporal consistency and redundancy in consecutive video frames. When temporal smoothness is suddenly broken, such as when an object is occluded, the result of these methods can deteriorate significantly. This paper explores the orthogonal approach of processing each frame independently, i.e. disregarding temporal information. In particular, it tackles the task of semi-supervised video object segmentation: the separation of an object from the background in a video, given its mask in the first frame. We present Semantic One-Shot Video Object Segmentation (OSVOS ), based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one shot). We show that instance-level semantic information, when combined effectively, can dramatically improve the results of our previous method, OSVOS. We perform experiments on two recent single-object video segmentation databases, which show that OSVOS is both the fastest and most accurate method in the state of the art. Experiments on multi-object video segmentation show that OSVOS obtains competitive results.",
"title": ""
},
{
"docid": "8717a6e3c20164981131997efbe08a0d",
"text": "The recent maturity of body sensor networks has enabled a wide range of applications in sports, well-being and healthcare. In this paper, we hypothesise that a single unobtrusive head-worn inertial sensor can be used to infer certain biomotion details of specific swimming techniques. The sensor, weighing only seven grams is mounted on the swimmer's goggles, limiting the disturbance to a minimum. Features extracted from the recorded acceleration such as the pitch and roll angles allow to recognise the type of stroke, as well as basic biomotion indices. The system proposed represents a non-intrusive, practical deployment of wearable sensors for swimming performance monitoring.",
"title": ""
},
{
"docid": "dd9e89b7e0c70fcc542a185d6bd98763",
"text": "This study describes metaphorical conceptualizations of the foreign exchange market held by market participants and examines how these metaphors socially construct the financial market. Findings are based on 55 semi-structured interviews with senior foreign exchange experts at banks and at financial news providers in Europe. We analysed interview transcripts by metaphor analysis, a method based on cognitive linguistics. Results indicate that market participants' understanding of financial markets revolves around seven metaphors, namely the market as a bazaar, as a machine, as gambling, as sports, as war, as a living being and as an ocean. Each of these metaphors highlights and conceals certain aspects of the foreign exchange market and entails a different set of implications on crucial market dimensions, such as the role of other market participants and market predictability. A correspondence analysis supports our assumption that metaphorical thinking corresponds with implicit assumptions about market predictability. A comparison of deliberately generated and implicitly used metaphors reveals notable differences. In particular, implicit metaphors are predominantly organic rather than mechanical. In contrast to academic models, interactive and organic metaphors, and not the machine metaphor, dominate the market accounts of participants.",
"title": ""
},
{
"docid": "2eebebc33b83bfcc7490723883ec66a9",
"text": "Getting clear images in underwater environments is an important issue in ocean engineering . The quality of underwater images plays a important role in scientific world. Capturing images underwater is difficult, generally due to deflection and reflection of water particles, and color change due to light travelling in water with different wavelengths. Light dispersion and color transform result in contrast loss and color deviation in images acquired underwater. Restoration and Enhancement of an underwater object from an image distorted by moving water waves is a very challenging task. This paper proposes wavelength compensation and image dehazing technique to balance the color change and light scattering respectively. It also removes artificial light by using depth map technique. Water depth is estimated by background color. Color change compensation is done by residual energy ratio method. A new approach is presented in this paper. We make use of a special technique called wavelength compensation and dehazing technique along with the artificial light removal technique simultaneously to analyze the raw image sequences and recover the true object. We test our approach on both pretended and data of real world, separately. Such technique has wide applications to areas such.",
"title": ""
},
{
"docid": "aa5d6e57350c2c1082091c62b6a941e8",
"text": "MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases ranging from mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem.",
"title": ""
},
{
"docid": "7e5cd1252d95bb095e7fabd54211fc38",
"text": "Interorganizational information systems, i.e., systems spanning more than a single organization, are proliferating as companies become aware of the potential of these systems to affect interorganizational interactions in terms of economic efficiency and strategic conduct. This new technology can have far-reaching impacts on the structure of entire industries. This article identifies two types of interorganizational information systems, information links and electronic markets. It then explores how economic models can be employed to study the implications of information links for the coordination of individual organizations with their customers and their suppliers, and the implications of electronic market systems for efficiency and competition in vertical markets. Finally, the strategic significance of interorganizational systems is addressed, and certain potential long-term impacts on the structure of markets, industries and organizations are discussed. This research was supported in part with funding from an Irvine Faculty Research Fellowship and from the National Science Foundation (Grant Number IRI-9015497). The author is grateful to the three anonymous referees for their valuable comments during the review process.",
"title": ""
},
{
"docid": "c8269e0a67ab7f1af77a1ff5d602fd87",
"text": "Cryptanalysis identifies weaknesses of ciphers and investigates methods to exploit them in order to compute the plaintext and/or the secret cipher key. Exploitation is nontrivial and, in many cases, weaknesses have been shown to be effective only on reduced versions of the ciphers. In this paper we apply artificial neural networks to automatically “assist” cryptanalysts into exploiting cipher weaknesses. The networks are trained by providing data in a form that points out the weakness together with the encryption key, until the network is able to generalize and predict the key (or evaluate its likelihood) for any possible ciphertext. We illustrate the effectiveness of the approach through simple classical ciphers, by providing the first ciphertext-only attack on substitution ciphers based on neural networks.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "ec271fa90e4eb72cdda63e0dfddc5b80",
"text": "One property of electromagnetic waves that has been recently explored is the ability to multiplex multiple beams, such that each beam has a unique helical phase front. The amount of phase front 'twisting' indicates the orbital angular momentum state number, and beams with different orbital angular momentum are orthogonal. Such orbital angular momentum based multiplexing can potentially increase the system capacity and spectral efficiency of millimetre-wave wireless communication links with a single aperture pair by transmitting multiple coaxial data streams. Here we demonstrate a 32-Gbit s(-1) millimetre-wave link over 2.5 metres with a spectral efficiency of ~16 bit s(-1) Hz(-1) using four independent orbital-angular momentum beams on each of two polarizations. All eight orbital angular momentum channels are recovered with bit-error rates below 3.8 × 10(-3). In addition, we demonstrate a millimetre-wave orbital angular momentum mode demultiplexer to demultiplex four orbital angular momentum channels with crosstalk less than -12.5 dB and show an 8-Gbit s(-1) link containing two orbital angular momentum beams on each of two polarizations.",
"title": ""
},
{
"docid": "fbd413241603459451b79d0ab9580932",
"text": "Document-level sentiment classification is a fundamental problem which aims to predict a user’s overall sentiment about a product in a document. Several methods have been proposed to tackle the problem whereas most of them fail to consider the influence of users who express the sentiment and products which are evaluated. To address the issue, we propose a deep memory network for document-level sentiment classification which could capture the user and product information at the same time. To prove the effectiveness of our algorithm, we conduct experiments on IMDB and Yelp datasets and the results indicate that our model can achieve better performance than several existing methods.",
"title": ""
},
{
"docid": "5d4797cffc06cbde079bf4019dc196db",
"text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
582392b3533e5ee78a91edb8079783d1
|
Annotating and Automatically Tagging Constructions of Causal Language
|
[
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "8093101949a96d27082712ce086bf11f",
"text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.",
"title": ""
}
] |
[
{
"docid": "3339aada96140d392182281a6c819f93",
"text": "Biometric applications have been used globally in everyday life. However, conventional biometrics is created and optimized for high-security scenarios. Being used in daily life by ordinary untrained people is a new challenge. Facing this challenge, designing a biometric system with prior constraints of ergonomics, we propose ergonomic biometrics design model, which attains the physiological factors, the psychological factors, and the conventional security characteristics. With this model, a novel hand-based biometric system, door knob hand recognition system (DKHRS), is proposed. DKHRS has the identical appearance of a conventional door knob, which is an optimum solution in both physiological factors and psychological factors. In this system, a hand image is captured by door knob imaging scheme, which is a tailored omnivision imaging structure and is optimized for this predetermined door knob appearance. Then features are extracted by local Gabor binary pattern histogram sequence method and classified by projective dictionary pair learning. In the experiment on a large data set including 12 000 images from 200 people, the proposed system achieves competitive recognition performance comparing with conventional biometrics like face and fingerprint recognition systems, with an equal error rate of 0.091%. This paper shows that a biometric system could be built with a reliable recognition performance under the ergonomic constraints.",
"title": ""
},
{
"docid": "866abb0de36960fba889282d67ce9dbd",
"text": "We present our experience with the use of local fasciocutaneous V-Y advancement flaps in the reconstruction of 10 axillae in 6 patients for large defects following wide excision of long-standing Hidradenitis suppurativa of the axilla. The defects were closed with local V-Y subcutaneous island flaps. A single flap from the chest wall was sufficient for moderate defects. However, for larger defects, an additional flap was taken from the medial side of the ipsilateral arm. The donor defects could be closed primarily in all the patients. The local areas of the lateral chest wall and the medial side of the arm have a plentiful supply of cutaneous perforators and the flaps can be designed in a V-Y fashion without resorting to preoperative marking of the perforator. The flaps were freed sufficiently to allow adequate movement for closure of the defects. Although no attempt was made to identify the perforators specifically, many perforators were seen entering the flap. Some perforators can be safely divided to increase reach of the flap. All the flaps survived completely. A follow up of 2.5 years is presented.",
"title": ""
},
{
"docid": "6210d2da6100adbd4db89a983d00419f",
"text": "Many binary code encoding schemes based on hashing have been actively studied recently, since they can provide efficient similarity search, especially nearest neighbor search, and compact data representations suitable for handling large scale image databases in many computer vision problems. Existing hashing techniques encode high-dimensional data points by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. Furthermore, we propose a new binary code distance function, spherical Hamming distance, that is tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve balanced partitioning of data points for each hash function and independence between hashing functions. Our extensive experiments show that our spherical hashing technique significantly outperforms six state-of-the-art hashing techniques based on hyperplanes across various image benchmarks of sizes ranging from one to 75 million of GIST descriptors. The performance gains are consistent and large, up to 100% improvements. The excellent results confirm the unique merits of the proposed idea in using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.",
"title": ""
},
{
"docid": "5e2fc7744cc438a77373bc7694fc03ac",
"text": "Anisotropic impedance surfaces have been demonstrated to be useful for a variety of applications ranging from antennas, to surface wave guiding, to control of scattering. To increase their anisotropy requires elongated unit cells which have reduced symmetry and thus are not easily arranged into arbitrary patterns. We discuss the limitations of existing patterning techniques, and explore options for generating anisotropic impedance surfaces with arbitrary spatial variation. We present an approach that allows a wide range of anisotropic impedance profiles, based on a point-shifting method combined with a Voronoi cell generation technique. This approach can be used to produce patterns which include highly elongated cells with varying orientation, and cells which can smoothly transition between square, rectangular, hexagonal, and other shapes with a wide range of aspect ratios. We demonstrate a practical implementation of this technique which allows us to define gaps between the cells to generate impedance surfaces, and we use it to implement a simple example of a structure which requires smoothly varying impedance, in the form of a planar Luneberg lens. Simulations of the lens are verified by measurements, validating our pattern generation technique.",
"title": ""
},
{
"docid": "5026e994507ce6858114d86238b042d4",
"text": "The scope of scientific computing continues to grow and now includes diverse application areas such as network analysis, combinatorialcomputing, and knowledge discovery, to name just a few. Large problems in these application areas require HPC resources, but they exhibit computation and communication patterns that are irregular, fine-grained, and non-local, making it difficult to apply traditional HPC approaches to achieve scalable solutions. In this paper we present Active Pebbles, a programming and execution model developed explicitly to enable the development of scalable software for these emerging application areas. Our approach relies on five main techniques--scalable addressing, active routing, message coalescing, message reduction, and termination detection--to separate algorithm expression from communication optimization. Using this approach, algorithms can be expressed in their natural forms, with their natural levels of granularity, while optimizations necessary for scalability can be applied automatically to match the characteristics of particular machines. We implement several example kernels using both Active Pebbles and existing programming models, evaluating both programmability and performance. Our experimental results demonstrate that the Active Pebbles model can succinctly and directly express irregular application kernels, while still achieving performance comparable to MPI-based implementations that are significantly more complex.",
"title": ""
},
{
"docid": "4e0e6ca2f4e145c17743c42944da4cc8",
"text": "We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security.",
"title": ""
},
{
"docid": "5f4330e3ddd6339cf340a72c73d2106b",
"text": "As a new trend for data-intensive computing, real-time stream computing is gaining significant attention in the big data era. In theory, stream computing is an effective way to support big data by providing extremely low-latency processing tools and massively parallel processing architectures in real-time data analysis. However, in most existing stream computing environments, how to efficiently deal with big data stream computing, and how to build efficient big data stream computing systems are posing great challenges to big data computing research. First, the data stream graphs and the system architecture for big data stream computing, and some related key technologies, such as system structure, data transmission, application interfaces, and high availability, are systemically researched. Then, we give a classification of the latest research and depict the development status of some popular big data stream computing systems, including Twitter Storm, Yahoo! S4, Microsoft TimeStream, and Microsoft Naiad. Finally, the potential challenges and future directions of big data stream computing are discussed. 11.",
"title": ""
},
{
"docid": "c496424323fa958e09bbe0f6504f842d",
"text": "In this research a new hybrid prediction algorithm for breast cancer has been made from a breast cancer data set. Many approaches are available in diagnosing the medical diseases like genetic algorithm, ant colony optimization, particle swarm optimization, cuckoo search algorithm, etc., The proposed algorithm uses a ReliefF attribute reduction with entropy based genetic algorithm for breast cancer detection. The hybrid combination of these techniques is used to handle the dataset with high dimension and uncertainties. The data are obtained from the Wisconsin breast cancer dataset; these data have been categorized based on different properties. The performance of the proposed method is evaluated and the results are compared with other well known feature selection methods. The obtained result shows that the proposed method has a remarkable ability to generate reduced-size subset of salient features while yielding significant classification accuracy for large datasets.",
"title": ""
},
{
"docid": "7cce3ad08afe6c35046da014d82fc1ef",
"text": "The developmental histories of 32 players in the Australian Football League (AFL), independently classified as either expert or less skilled in their perceptual and decision-making skills, were collected through a structured interview process and their year-on-year involvement in structured and deliberate play activities retrospectively determined. Despite being drawn from the same elite level of competition, the expert decision-makers differed from the less skilled in having accrued, during their developing years, more hours of experience in structured activities of all types, in structured activities in invasion-type sports, in invasion-type deliberate play, and in invasion activities from sports other than Australian football. Accumulated hours invested in invasion-type activities differentiated between the groups, suggesting that it is the amount of invasion-type activity that is experienced and not necessarily intent (skill development or fun) or specificity that facilitates the development of perceptual and decision-making expertise in this team sport.",
"title": ""
},
{
"docid": "9f01b1e2bbc2d2b940c04f07b05bf5bb",
"text": "Inferior parietal lobule (IPL) neurons were studied when monkeys performed motor acts embedded in different actions and when they observed similar acts done by an experimenter. Most motor IPL neurons coding a specific act (e.g., grasping) showed markedly different activations when this act was part of different actions (e.g., for eating or for placing). Many motor IPL neurons also discharged during the observation of acts done by others. Most responded differentially when the same observed act was embedded in a specific action. These neurons fired during the observation of an act, before the beginning of the subsequent acts specifying the action. Thus, these neurons not only code the observed motor act but also allow the observer to understand the agent's intentions.",
"title": ""
},
{
"docid": "92abe28875dbe72fbc16bdf41b324126",
"text": "We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. 1",
"title": ""
},
{
"docid": "f1cbd60e1bd721e185bbbd12c133ad91",
"text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.",
"title": ""
},
{
"docid": "9c2ce030230ccd91fdbfbd9544596604",
"text": "The kind of causal inference seen in natural human thought can be \"algorithmitized\" to help produce human-level machine intelligence.",
"title": ""
},
{
"docid": "ff664eac9ffb8cae9b4db1bc09629935",
"text": "In this paper, we apply sentiment analysis and machine learning principles to find the correlation between ”public sentiment” and ”market sentiment”. We use twitter data to predict public mood and use the predicted mood and previous days’ DJIA values to predict the stock market movements. In order to test our results, we propose a new cross validation method for financial data and obtain 75.56% accuracy using Self Organizing Fuzzy Neural Networks (SOFNN) on the Twitter feeds and DJIA values from the period June 2009 to December 2009. We also implement a naive protfolio management strategy based on our predicted values. Our work is based on Bollen et al’s famous paper which predicted the same with 87% accuracy.",
"title": ""
},
{
"docid": "f7ff118b8f39fa0843c4861306b4910f",
"text": "This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. On the use of row convolution (Amodei et al., 2015), the encoder of the proposed model composes word-level information from the input sequences of characters automatically. Since our model doesn’t rely on the boundaries between each word (as the whitespace boundaries in English), it is also applied to languages without explicit word segmentations (like Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve comparable translation performance with the traditional word based NMT models. Despite the target side is still word based, the proposed model is able to generate much less unknown words.",
"title": ""
},
{
"docid": "0c4f09c41c35690de71f106403d14223",
"text": "This paper views Islamist radicals as self-interested political revolutionaries and builds on a general model of political extremism developed in a previous paper (Ferrero, 2002), where extremism is modelled as a production factor whose effect on expected revenue is initially positive and then turns negative, and whose level is optimally chosen by a revolutionary organization. The organization is bound by a free-access constraint and hence uses the degree of extremism as a means of indirectly controlling its level of membership with the aim of maximizing expected per capita income of its members, like a producer co-operative. The gist of the argument is that radicalization may be an optimal reaction to perceived failure (a widespread perception in the Muslim world) when political activists are, at the margin, relatively strongly averse to effort but not so averse to extremism, a configuration that is at odds with secular, Western-style revolutionary politics but seems to capture well the essence of Islamic revolutionary politics, embedded as it is in a doctrinal framework.",
"title": ""
},
{
"docid": "1a834cb0c5d72c6bc58c4898d318cfc2",
"text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.",
"title": ""
},
{
"docid": "f0ced128e23c4f17abc635f88178a6c1",
"text": "This paper explores liquidity risk in a system of interconnected financial institutions when these institutions are subject to regulatory solvency constraints. When the market’s demand for illiquid assets is less than perfectly elastic, sales by distressed institutions depress the market price of such assets. Marking to market of the asset book can induce a further round of endogenously generated sales of assets, depressing prices further and inducing further sales. Contagious failures can result from small shocks. We investigate the theoretical basis for contagious failures and quantify them through simulation exercises. Liquidity requirements on institutions can be as effective as capital requirements in forestalling contagious failures. ∗First version. We thank Andy Haldane and Vicky Saporta for their comments during the preparation of this paper. The opinions expressed in this paper are those of the authors, and do not necessarily reflect those of the Central Bank of Chile, or the Bank of England. Please direct any correspondence to Hyun Song Shin, h.s.shin@lse.ac.uk.",
"title": ""
},
{
"docid": "923363771ee11cc5b06917385f5832c0",
"text": "This article presents a novel automatic method (AutoSummENG) for the evaluation of summarization systems, based on comparing the character n-gram graphs representation of the extracted summaries and a number of model summaries. The presented approach is language neutral, due to its statistical nature, and appears to hold a level of evaluation performance that matches and even exceeds other contemporary evaluation methods. Within this study, we measure the effectiveness of different representation methods, namely, word and character n-gram graph and histogram, different n-gram neighborhood indication methods as well as different comparison methods between the supplied representations. A theory for the a priori determination of the methods' parameters along with supporting experiments concludes the study to provide a complete alternative to existing methods concerning the automatic summary system evaluation process.",
"title": ""
},
{
"docid": "91b386ef617f75dd480e44708eb5a521",
"text": "The recent rise of interest in Virtual Reality (VR) came with the availability of commodity commercial VR products, such as the Head Mounted Displays (HMD) created by Oculus and other vendors. To accelerate the user adoption of VR headsets, content providers should focus on producing high quality immersive content for these devices. Similarly, multimedia streaming service providers should enable the means to stream 360 VR content on their platforms. In this study, we try to cover different aspects related to VR content representation, streaming, and quality assessment that will help establishing the basic knowledge of how to build a VR streaming system.",
"title": ""
}
] |
scidocsrr
|
1362e5aa02e2cc10cdae299a3653a2eb
|
Depression Estimation Using Audiovisual Features and Fisher Vector Encoding
|
[
{
"docid": "cda19d99a87ca769bb915167f8a842e8",
"text": "Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.",
"title": ""
},
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
}
] |
[
{
"docid": "aa23a546d17572f6b79c72832d83308b",
"text": "Leader opening and closing behaviors are assumed to foster high levels of employee exploration and exploitation behaviors, hence motivating employee innovative performance. Applying the ambidexterity theory of leadership for innovation, results revealed that leader opening and closing behaviors positively predicted employee exploration and exploitation behaviors, respectively, above and beyond the control variables. Moreover, results showed that employee innovative performance was significantly predicted by leader opening behavior, leader closing behavior, and the interaction between leaders’ opening and closing behaviors, above and beyond control variables.",
"title": ""
},
{
"docid": "999f30cbd208bc7d262de954d29dcd39",
"text": "Purpose\nThe purpose of the study was to determine the sensitivity and specificity, and to establish cutoff points for the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders.\n\n\nMethods\n72 children between 5:00 and 7:11 years old - 36 children without speech and language complaints and 36 children with speech sound disorders. The PCC-R was applied to the figure naming and word imitation tasks that are part of the ABFW Child Language Test. Results were statistically analyzed. The ROC curve was performed and sensitivity and specificity values of the index were verified.\n\n\nResults\nThe group of children without speech sound disorders presented greater PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity value of 0.89 and specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for age group ≤6:5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94) and for age group >6:5 years-old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94).\n\n\nConclusion\nGiven the high sensitivity and specificity of PCC-R, we can conclude that the index was effective in discriminating and identifying children with and without speech sound disorders.",
"title": ""
},
{
"docid": "2084a38c285ebfb2d5e40e8667414d0d",
"text": "Differential Evolution (DE) algorithm is a new heuristic approach mainly having three advantages; finding the true global minimum regardless of the initial parameter values, fast convergence, and using few control parameters. DE algorithm is a population based algorithm like genetic algorithms using similar operators; crossover, mutation and selection. In this work, we have compared the performance of DE algorithm to that of some other well known versions of genetic algorithms: PGA, Grefensstette, Eshelman. In simulation studies, De Jong’s test functions have been used. From the simulation results, it was observed that the convergence speed of DE is significantly better than genetic algorithms. Therefore, DE algorithm seems to be a promising approach for engineering optimization problems.",
"title": ""
},
{
"docid": "749800c4dae57eb13b5c3df9e0c302a0",
"text": "In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods.
In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement.
The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval, within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences plot and as percentage differences plot.
The B&A plot method only defines the intervals of agreements, it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals.
The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies.",
"title": ""
},
{
"docid": "df92fe7057593a9312de91c06e1525ca",
"text": "The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial and novel and surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple formal principle explains science and art and music and humor. Note: This overview heavily draws on previous publications since 1990, especially Schmidhuber (2010b), parts of which are reprinted with friendly permission by IEEE.",
"title": ""
},
{
"docid": "aab2126d980eb594c3c831971d7e3ba9",
"text": "IP traceback can be used to find the origin of anonymous traffic; however, Internet-scale IP traceback systems have not been deployed due to a need for cooperation between Internet Service Providers (ISPs). This article presents an Internet-scale Passive IP Trackback (PIT) mechanism that does not require ISP deployment. PIT analyzes the ICMP messages that may scattered to a network telescope as spoofed packets travel from attacker to victim. An Internet route model is then used to help re-construct the attack path. Applying this mechanism to data collected by Cooperative Association for Internet Data Analysis (CAIDA), we found PIT can construct a trace tree from at least one intermediate router in 55.4% the fiercest packet spoofing attacks, and can construct a tree from at least 10 routers in 23.4% of attacks. This initial result shows PIT is a promising mechanism.",
"title": ""
},
{
"docid": "05874da7b27475377dcd8f7afdd1bc5a",
"text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler.Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator,When ever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.",
"title": ""
},
{
"docid": "8aa188ccf31663abb8a711e8b2b8f36b",
"text": "Research has demonstrated that use of texting slang (textisms) when text messaging does not appear to impact negatively on children's literacy outcomes and may even benefit children's spelling attainment. However, less attention has been paid to the impact of text messaging on the development of children's and young people's understanding of grammar. This study therefore examined the interrelationships between children's and young adults' tendency to make grammatical violations when texting and their performance on formal assessments of spoken and written grammatical understanding, orthographic processing and spelling ability over the course of 1 year. Zero-order correlations showed patterns consistent with previous research on textism use and spelling, and there was no evidence of any negative associations between the development of the children's performance on the grammar tasks and their use of grammatical violations when texting. Adults' tendency to use ungrammatical word forms ('does you') was positively related to performance on the test of written grammar. Grammatical violations were found to be positively associated with growth in spelling for secondary school children. However, not all forms of violation were observed to be consistently used in samples of text messages taken 12 months apart or were characteristic of typical text messages. The need to differentiate between genuine errors and deliberate violation of rules is discussed, as are the educational implications of these findings.",
"title": ""
},
{
"docid": "939a8a41a41f61327e94a4d4eb21a75b",
"text": "This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.",
"title": ""
},
{
"docid": "520e87ff9133c15f534b3e8eccb048a3",
"text": "The greater trochanter of the femur is a bony protuberance arising at the femoral neck and shaft interface. The greater trochanter has 4 distinct facets (anterior, superoposterior, lateral, and posterior) that serve for attachments of the abductor tendons and/or sites for bursae [1] (Figures 1 and 2). The gluteus minimus and medius muscles arise from the external iliac fossa and their corresponding tendons insert onto the greater trochanter (Figures 1-3). The gluteus medius muscle almost completely covers the gluteus minimus muscle. The gluteus minimus tendon attaches to the anterior facet (main insertion) (Figures 1-3) and to the anterior and superior hip joint capsule. From posterior to anterior, the gluteus medius tendon attaches to the superoposterior facet (main tendinous attachment), the inferior aspect of the lateral facet, and more anteriorly to the gluteus minimus tendon [2]. The posterior facet is devoid of tendon attachments (Figures 1-3). A variety of bursae have been described in the vicinity of the greater trochanter [3]. The 3 most consistently identified bursae are the subgluteus minimus, subgluteus medius, and subgluteus maximus bursae. The subgluteus minimus bursa lies deep to the gluteus minimus tendon. The subgluteus medius bursa is located between the lateral insertion of the gluteus medius tendon and the superior part of the lateral facet (this portion of the lateral facet is devoid of tendon insertion and is known as the trochanteric bald spot) [4] (Figure 1). The largest bursa is the subgluteus maximus. This bursa covers the posterior facet and lies deep to the gluteus maximus muscle (Figure 4).",
"title": ""
},
{
"docid": "825640f8ce425a34462b98869758e289",
"text": "Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of ht is blocked until the entire computation of ht−1 finishes, which is a major bottleneck for parallel computing. In this work, we propose an alternative RNN implementation by deliberately simplifying the state computation and exposing more parallelism. The proposed recurrent unit operates as fast as a convolutional layer and 5-10x faster than cuDNN-optimized LSTM. We demonstrate the unit’s effectiveness across a wide range of applications including classification, question answering, language modeling, translation and speech recognition. We open source our implementation in PyTorch and CNTK1.",
"title": ""
},
{
"docid": "1e17455be47fd697a085c8006f5947e9",
"text": "We present a simple, but surprisingly effective, method of self-training a twophase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f -score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon.",
"title": ""
},
{
"docid": "f5a012a451afbda47ad3b21e7d601b25",
"text": "In recent years Quantum Cryptography gets more attention as well as becomes most promising cryptographic field for faster, effective and more secure communications. Quantum Cryptography is an art of science which overlap quantum properties of light under quantum mechanics on cryptographic task instead of current state of algorithms based on mathematical computing technology. Major algorithms for public key encryption and some digital signature scheme such as RSA, El Gamal cryptosystem, hash function are vulnerable at quantum adversaries. Most of the factoring problem can be broken by Shore's algorithm and quantum computer threatens other hand discrete logarithm problem. Our paper describes why modern cryptography contributes quantum cryptography, security issues and future goals of modern cryptography.",
"title": ""
},
{
"docid": "9d80272f499057c714ff6dee9fba3b7e",
"text": "Classifying Web Queries by User Intent aims to identify the type of information need behind the queries. In this paper we use a set of features extracted only from the terms including in the query, without any external or additional information. We automatically extracted the features proposed from two different corpora, then implemented machine learning algorithms to validate the accuracy of the classification, and evaluate the results. We analyze the distribution of the features in the queries per class, present the classification results obtained and draw some conclusions about the feature query distribution.",
"title": ""
},
{
"docid": "607cff7a41d919bef9f4aa0cec3c1c9d",
"text": "The goal of this work was to develop and validate a neuro-fuzzy intelligent system (LOLIMOT) for rectal temperature prediction of broiler chickens. The neuro-fuzzy network was developed using SCILAB 4.1, on the ground of three Departamento de Engenharia, Universidade Federal de Lavras (UFLA), Caixa Postal 3037, Lavras/MG, Brasil le.ferreira@gmail.com yanagi@deg.ufla.br alisonzille@gmail.com Desenvolvimento de uma rede neuro-fuzzy para predição da temperatura retal de frangos de corte 222 RITA • Volume 17 • Número 2 • 2010 input variables: air temperature, relative humidity and air velocity. The output variable was rectal temperature. Experimental results, used for validation, showed that the average standard deviation between simulated and measured values of RT was 0.11 °C. The neuro-fuzzy system presents as a satisfactory hybrid intelligent system for rectal temperature prediction of broiler chickens, which adds fuzzy logic features based on the fuzzy sets theory to artificial neural networks.",
"title": ""
},
{
"docid": "780d67867f770f61cf1e9d02cfc12935",
"text": "In this correspondence, an improvement to the realization of the linear-phase IIR filters is described. It is based on the rearrangement of the numerator polynomials of the IIR filter functions that are used in the real-time realizations recently proposed in literature. The new realization has better total harmonic distortion when sine input is used, and it has smaller phase and group delay errors due to finite section length.",
"title": ""
},
{
"docid": "27fd27cf86b68822b3cfb73cff2e2cb6",
"text": "Patients with Liver disease have been continuously increasing because of excessive consumption of alcohol, inhale of harmful gases, intake of contaminated food, pickles and drugs. Automatic classification tools may reduce burden on doctors. This paper evaluates the selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are Naïve Bayes classifier, C4.5, Back propagation Neural Network algorithm, and Support Vector Machines. These algorithms are evaluated based on four criteria: Accuracy, Precision, Sensitivity and Specificity.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "1ef0a2569a1e6a4f17bfdc742ad30a7f",
"text": "Internet of Things (IoT) is becoming more and more popular. Increasingly, European projects (CityPulse, IoT.est, IoT-i and IERC), standard development organizations (ETSI M2M, oneM2M and W3C) and developers are involved in integrating Semantic Web technologies to Internet of Things. All of them design IoT application uses cases which are not necessarily interoperable with each other. The main innovative research challenge is providing a unified system to build interoperable semantic-based IoT applications. In this paper, to overcome this challenge, we design the Semantic Web of Things (SWoT) generator to assist IoT projects and developers in: (1) building interoperable Semantic Web of Things (SWoT) applications by providing interoperable semantic-based IoT application templates, (2) easily inferring high-level abstractions from sensor measurements thanks to the rules provided by the template, (3) designing domain-specific or inter-domain IoT applications thanks to the interoperable domain knowledge provided by the template, and (4) encouraging to reuse as much as possible the background knowledge already designed. We demonstrate the usefulness of our contribution though three use cases: (1) cloud-based IoT developers, (2) mobile application developers, and (3) assisting IoT projects. A proof-of concept for providing Semantic Web of Things application templates is available at http://www.sensormeasurement.appspot.com/?p=m3api.",
"title": ""
},
{
"docid": "247eebd69a651f6f116f41fdf885ae39",
"text": "RFID identification is a new technology that will become ubiquitous as RFID tags will be applied to every-day items in order to yield great productivity gains or “smart” applications for users. However, this pervasive use of RFID tags opens up the possibility for various attacks violating user privacy. In this work we present an RFID authentication protocol that enforces user privacy and protects against tag cloning. We designed our protocol with both tag-to-reader and reader-to-tag authentication in mind; unless both types of authentication are applied, any protocol can be shown to be prone to either cloning or privacy attacks. Our scheme is based on the use of a secret shared between tag and database that is refreshed to avoid tag tracing. However, this is done in such a way so that efficiency of identification is not sacrificed. Additionally, our protocol is very simple and it can be implemented easily with the use of standard cryptographic hash functions. In analyzing our protocol, we identify several attacks that can be applied to RFID protocols and we demonstrate the security of our scheme. Furthermore, we show how forward privacy is guaranteed; messages seen today will still be valid in the future, even after the tag has been compromised.",
"title": ""
}
] |
scidocsrr
|
5996afed11250f6ca35337a7c3efe964
|
Indoor scene recognition through object detection
|
[
{
"docid": "305efd1823009fe79c9f8ff52ddb5724",
"text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.",
"title": ""
}
] |
[
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "6244dcd761e35c5dc4735840593ecaf1",
"text": "AIM\nThe aim of the work was a presentation of one case with Thrombocytopenia absent radius (TAR) syndrome.\n\n\nMETHODS\nDiagnosis of TAR syndrome has been established on the basis of pedigree, laboratory findings (hemogram, platelet count, peripheral smear), bone marrow biopsy, radiological examination and karyotype.\n\n\nRESULTS\nA patient was a two months old female child, hospitalized due petechial bleeding, upper limb anomalies and diarrhea.\n\n\nLABORATORY FINDINGS\nred blood cell count was 2.1 x 1012/L, hemoglobin value was 62 g/L, white blood cell count indicated the existence of leukemoid reaction (40.0 x 109/L), the eosinophyle count at the leukocyte formula was increased (3%), bleeding time was prolonged (10'). The platelets at the peripheral blood smear were rarely present, whereas the megacaryocytes appeared in the bone marrow aspiration in the decreased number, or did not appear at all. At the radiological examination of the upper limbs, radius was absent in both shoulders.\n\n\nCONCLUSION\nTAR syndrome is a rare hereditary disease. Obligatory clinical manifestations are: thrombocytopenia and bilateral absence of the radius. Prenatal diagnosis can be established during the 16th week of gestation by ultrasound and if it is continued with the pregnancy it is preferred that the platelet transfusion be given intrauterine. The mortality rate depends on the age of the patient and the platelet count.",
"title": ""
},
{
"docid": "d72d2bc2184f6baee47eaf3c07dcace4",
"text": "Regional variation in sweating over the body is widely recognised. However, most studies only measured a limited number of regions, with the use of differing thermal states across studies making a good meta-analysis to obtain a whole body map problematic. A study was therefore conducted to investigate regional sweat rates (RSR) and distributions over the whole body in male athletes. A modified absorbent technique was used to collect sweat at two exercise intensities [55% (I1) and 75% (I2) $$ {\\dot{\\text{V}}\\text{O}}_{{2{ \\max }}} $$ ] in moderately warm conditions (25°C, 50% rh, 2 m s−1 air velocity). At I1 and I2, highest sweat rates were observed on the central (upper and mid) and lower back, with values as high as 1,197, 1,148, and 856 g m−2 h−1, respectively, at I2. Lowest values were observed on the fingers, thumbs, and palms, with values of 144, 254, and 119 g m−2 h−1, respectively at I2. Sweat mapping of the head demonstrated high sweat rates on the forehead (1,710 g m−2 h−1 at I2) compared with low values on the chin (302 g m−2 h−1 at I2) and cheeks (279 g m−2 h−1 at I2). Sweat rate increased significantly in all regions from the low to high exercise intensity, with exception of the feet and ankles. No significant correlation was present between RSR and regional skin temperature (T sk), nor did RSR correspond to known patterns of regional sweat gland density. The present study has provided detailed regional sweat data over the whole body and has demonstrated large intra- and inter-segmental variation and the presence of consistent patterns of regional high versus low sweat rate areas in Caucasians male athletes. This data may have important applications for clothing design, thermophysiological modelling and thermal manikin design.",
"title": ""
},
{
"docid": "6558b2a3c43e11d58f3bb829425d6a8d",
"text": "While end-to-end neural conversation models have led to promising advances in reducing hand-crafted features and errors induced by the traditional complex system architecture, they typically require an enormous amount of data due to the lack of modularity. Previous studies adopted a hybrid approach with knowledge-based components either to abstract out domainspecific information or to augment data to cover more diverse patterns. On the contrary, we propose to directly address the problem using recent developments in the space of continual learning for neural models. Specifically, we adopt a domainindependent neural conversational model and introduce a novel neural continual learning algorithm that allows a conversational agent to accumulate skills across different tasks in a data-efficient way. To the best of our knowledge, this is the first work that applies continual learning to conversation systems. We verified the efficacy of our method through a conversational skill transfer from either synthetic dialogs or human-human dialogs to human-computer conversations in a customer support domain.",
"title": ""
},
{
"docid": "fb039b1837209a3f3c01289d9adc275b",
"text": "This paper presents a comprehensive research study of the detection of US traffic signs. Until now, the research in Traffic Sign Recognition systems has been centered on European traffic signs, but signs can look very different across different parts of the world, and a system which works well in Europe may indeed not work in the US. We go over the recent advances in traffic sign detection and discuss the differences in signs across the world. Then we present a comprehensive extension to the publicly available LISA-TS traffic sign dataset, almost doubling its size, now with HD-quality footage. The extension is made with testing of tracking sign detection systems in mind, providing videos of traffic sign passes. We apply the Integral Channel Features and Aggregate Channel Features detection methods to US traffic signs and show performance numbers outperforming all previous research on US signs (while also performing similarly to the state of the art on European signs). Integral Channel Features have previously been used successfully for European signs, while Aggregate Channel Features have never been applied to the field of traffic signs. We take a look at the performance differences between the two methods and analyze how they perform on very distinctive signs, as well as white, rectangular signs, which tend to blend into their environment.",
"title": ""
},
{
"docid": "d066c07fc64cf91f32be6ccf83761789",
"text": "This study tests the hypothesis that chewing gum leads to cognitive benefits through improved delivery of glucose to the brain, by comparing the cognitive performance effects of gum and glucose administered separately and together. Participants completed a battery of cognitive tests in a fully related 2 x 2 design, where one factor was Chewing Gum (gum vs. mint sweet) and the other factor was Glucose Co-administration (consuming a 25 g glucose drink vs. consuming water). For four tests (AVLT Immediate Recall, Digit Span, Spatial Span and Grammatical Transformation), beneficial effects of chewing and glucose were found, supporting the study hypothesis. However, on AVLT Delayed Recall, enhancement due to chewing gum was not paralleled by glucose enhancement, suggesting an alternative mechanism. The glucose delivery model is supported with respect to the cognitive domains: working memory, immediate episodic long-term memory and language-based attention and processing speed. However, some other mechanism is more likely to underlie the facilitatory effect of chewing gum on delayed episodic long-term memory.",
"title": ""
},
{
"docid": "dd0f24e898523f7f218fc0a2a7ba6210",
"text": "In this article, we present results on the identification and behavioral analysis of social bots in a sample of 542,584 Tweets, collected before and after Japan's 2014 general election. Typical forms of bot activity include massive Retweeting and repeated posting of (nearly) the same message, sometimes used in combination. We focus on the second method and present (1) a case study on several patterns of bot activity, (2) methodological considerations on the automatic identification of such patterns and the prerequisite near-duplicate detection, and (3) we give qualitative insights into the purposes behind the usage of social/political bots. We argue that it was in the latency of the semi-public sphere of social media-and not in the visible or manifest public sphere (official campaign platform, mass media)-where Shinzō Abe's hidden nationalist agenda interlocked and overlapped with the one propagated by organizations such as Nippon Kaigi and Internet right-wingers (netto uyo) during the election campaign, the latter potentially forming an enormous online support army of Abe's agenda.",
"title": ""
},
{
"docid": "c6e29402f386e466254d99b677b9e18b",
"text": "A planar Yagi-Uda antenna with a single director, a meandered driven dipole, and a concave parabolic reflector on a thin dielectric substrate is proposed. Through this design, the high directivity of 7.3 dBi, front-to-back ratio of 14.7 dB, cross-polarization level of −39.1 dB, bandwidth of 5.8%, and the radiation efficiency of 87.5%, which is better than −1 dBi in terms of the 3D average gain, can be achieved. Besides, the area of this antenna is much smaller than that of the previously proposed one by about 78%. Therefore, the proposed antenna is suitable for the GPS (Global Positioning System) application in mobile devices whose volumes are usually not sufficient for embedded antennas with a good RHCP (right hand circular polarization) AR (axial ratio) values and not enough angular coverage of the designed AR values.",
"title": ""
},
{
"docid": "3cc7d58006376b14fbb175262840b185",
"text": "We live and operate in the world of computing and computers. The Internet has drastically changed the computing world from the concept of parallel computing to distributed computing to grid computing and now to cloud computing. Cloud computing is a new wave in the field of information technology. Some see it as an emerging field in computer science. It consists of a set of resources and services offered through the Internet. Hence, \"cloud computing\" is also called \"Internet computing.\" The word \"cloud\" is a metaphor for describing the Web as a space where computing has been preinstalled and exists as a service. Operating systems, applications, storage, data, and processing capacity all exist on the Web, ready to be shared among users. Figure 1 shows a conceptual diagram of cloud computing.",
"title": ""
},
{
"docid": "8bb5acdafefc35f6c1adf00cfa47ac2c",
"text": "A general method is introduced for separating points in multidimensional spaces through the use of stochastic processes. This technique is called stochastic discrimination.",
"title": ""
},
{
"docid": "08844c98f9d6b92f84d272516af64281",
"text": "This paper describes the synthesis of Dynamic Differential Logic to increase the resistance of FPGA implementations against Differential Power Analysis. The synthesis procedure is developed and a detailed description is given of how EDA tools should be used appropriately to implement a secure digital design flow. Compared with an existing technique to implement Dynamic Differential Logic on FPGA, the technique saves a factor 2 in slice utilization. Experimental results also indicate that a secure version of the AES encryption algorithm can now be implemented with a mere 50% increase in time delay and 90% increase in slice utilization when compared with a normal non-secure single ended implementation.",
"title": ""
},
{
"docid": "15d1d22af97a2e71f7bc92b7e8c1d76c",
"text": "This paper presents a new methodological approach for selection of appropriate type and number of Membership function (MF's) for the effective control of Double Inverted Pendulum (DIP). A Matlab-Simulink model of the system is built using governing mathematical equations. The relation between error tolerance of successive approximations and the number of MF's for controllers is also shown. Stabilization is done using Fuzzy and Adaptive Neuro Fuzzy Inference System (ANFIS) controllers having triangular and gbell MF's respectively. The proposed ANFIS and fuzzy controller stabilizes DIP system within 2.5 and 3.0 seconds respectively. All the three controllers have shown almost zero amount of steady state error. Both the controllers gives excellent result which proves the validity of the proposed model. ANFIS controller provides better results as compared to fuzzy controller. Results for Settling time (s), Steady state error and Maximum overshoot (degrees) for each input and output are elaborated with the help of graphs and tables.",
"title": ""
},
{
"docid": "8bda640f73c3941272739a57a5d02353",
"text": "Researchers strive to understand eating behavior as a means to develop diets and interventions that can help people achieve and maintain a healthy weight, recover from eating disorders, or manage their diet and nutrition for personal wellness. A major challenge for eating-behavior research is to understand when, where, what, and how people eat. In this paper, we evaluate sensors and algorithms designed to detect eating activities, more specifically, when people eat. We compare two popular methods for eating recognition (based on acoustic and electromyography (EMG) sensors) individually and combined. We built a data-acquisition system using two off-the-shelf sensors and conducted a study with 20 participants. Our preliminary results show that the system we implemented can detect eating with an accuracy exceeding 90.9% while the crunchiness level of food varies. We are developing a wearable system that can capture, process, and classify sensor data to detect eating in real-time.",
"title": ""
},
{
"docid": "9fba2674a28d38fab56d687f1904e592",
"text": "Two Ka-band power amplifier MMICs, 4W and 6W, with high power density and gain are presented. Each amplifier was designed using a 5-stage topology to demonstrate over 30dB of gain. The 4W design exhibited a peak saturated output power of 37.2dBm and a chip output power density of 532 mW/mm/sup 2/. This is the highest recorded power density for a Ka-band power amplifier design to date. The high gain and power density make them ideal for low-cost Ka-band transmit systems.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "cc0a9028b6680bd0c2a4a30528d2c613",
"text": "In 3 studies, the authors tested the hypothesis that discrimination targets' worldview moderates the impact of perceived discrimination on self-esteem among devalued groups. In Study 1, perceiving discrimination against the ingroup was negatively associated with self-esteem among Latino Americans who endorsed a meritocracy worldview (e.g., believed that individuals of any group can get ahead in America and that success stems from hard work) but was positively associated with self-esteem among those who rejected this worldview. Study 2 showed that exposure to discrimination against their ingroup (vs. a non-self-relevant group) led to lower self-esteem, greater feelings of personal vulnerability, and ingroup blame among Latino Americans who endorsed a meritocracy worldview but to higher self-esteem and decreased ingroup blame among Latino Americans who rejected it. Study 3 showed that compared with women informed that prejudice against their ingroup is pervasive, women informed that prejudice against their ingroup is rare had higher self-esteem if they endorsed a meritocracy worldview but lower self-esteem if they rejected this worldview. Findings support the idea that perceiving discrimination against one's ingroup threatens the worldview of individuals who believe that status in society is earned but confirms the worldview of individuals who do not.",
"title": ""
},
{
"docid": "73ded3dd5e6b5abe5e882beb12312ea9",
"text": "As deep learning methods form a critical part in commercially important applications such as autonomous driving and medical diagnostics, it is important to reliably detect out-of-distribution (OOD) inputs while employing these algorithms. In this work, we propose an OOD detection algorithm which comprises of an ensemble of classifiers. We train each classifier in a self-supervised manner by leaving out a random subset of training data as OOD data and the rest as in-distribution (ID) data. We propose a novel margin-based loss over the softmax output which seeks to maintain at least a margin m between the average entropy of the OOD and in-distribution samples. In conjunction with the standard cross-entropy loss, we minimize the novel loss to train an ensemble of classifiers. We also propose a novel method to combine the outputs of the ensemble of classifiers to obtain OOD detection score and class prediction. Overall, our method convincingly outperforms Hendrycks et al. [7] and the current state-of-the-art ODIN [13] on several OOD detection benchmarks.",
"title": ""
},
{
"docid": "c8cb32e37aa01b712c7e6921800fbe60",
"text": "Risky families are characterized by conflict and aggression and by relationships that are cold, unsupportive, and neglectful. These family characteristics create vulnerabilities and/or interact with genetically based vulnerabilities in offspring that produce disruptions in psychosocial functioning (specifically emotion processing and social competence), disruptions in stress-responsive biological regulatory systems, including sympathetic-adrenomedullary and hypothalamic-pituitary-adrenocortical functioning, and poor health behaviors, especially substance abuse. This integrated biobehavioral profile leads to consequent accumulating risk for mental health disorders, major chronic diseases, and early mortality. We conclude that childhood family environments represent vital links for understanding mental and physical health across the life span.",
"title": ""
}
] |
scidocsrr
|
82bfc26f22616644da9692a1a53a738c
|
Critical periods in speech perception: new directions.
|
[
{
"docid": "5b21b248dc51b027fa3919514c346b94",
"text": "How will we view schizophrenia in 2030? Schizophrenia today is a chronic, frequently disabling mental disorder that affects about one per cent of the world’s population. After a century of studying schizophrenia, the cause of the disorder remains unknown. Treatments, especially pharmacological treatments, have been in wide use for nearly half a century, yet there is little evidence that these treatments have substantially improved outcomes for most people with schizophrenia. These current unsatisfactory outcomes may change as we approach schizophrenia as a neurodevelopmental disorder with psychosis as a late, potentially preventable stage of the illness. This ‘rethinking’ of schizophrenia as a neurodevelopmental disorder, which is profoundly different from the way we have seen this illness for the past century, yields new hope for prevention and cure over the next two decades.",
"title": ""
}
] |
[
{
"docid": "193042bd07d5e9672b04ede9160d406c",
"text": "We report on the flip chip packaging of Micro-Electro-Mechanical System (MEMS)-based digital silicon photonic switching device and the characterization results of 12 × 12 switching ports. The challenges in packaging N<sup> 2</sup> electrical and 2N optical interconnections are addressed with single-layer electrical redistribution lines of 25 <italic>μ</italic>m line width and space on aluminum nitride interposer and 13° polished 64-channel lidless fiber array (FA) with a pitch of 127 <italic>μ</italic>m. 50 <italic>μ</italic>m diameter solder spheres are laser-jetted onto the electrical bond pads surrounded by suspended MEMS actuators on the device before fluxless flip-chip bonding. A lidless FA is finally coupled near-vertically onto the device gratings using a 6-degree-of-freedom (6-DOF) alignment system. Fiber-to-grating coupler loss of 4.25 dB/facet, 10<sup>–11 </sup> bit error rate (BER) through the longest optical path, and 0.4 <italic>μ</italic>s switch reconfiguration time have been demonstrated using 10 Gb/s Ethernet data stream.",
"title": ""
},
{
"docid": "b6cd4612ce96528b42d68aa8e4bfc10f",
"text": "Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results but suffer from (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real time to ensure global consistency, all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.1",
"title": ""
},
{
"docid": "4054cdba0d2eefe879cc663a23e6164f",
"text": "Automatic detection and classification of lesions in medical images remains one of the most important and challenging problems. In this paper, we present a new multi-task convolutional neural network (CNN) approach for detection and semantic description of lesions in diagnostic images. The proposed CNN-based architecture is trained to generate and rank rectangular regions of interests (ROI’s) surrounding suspicious areas. The highest score candidates are fed into the subsequent network layers. These layers are trained to generate semantic description of suspicious findings. During the training stage, our method uses rectangular ground truth boxes; it does not require accurately delineated lesion contours. It has a clear advantage for supervised training on large datasets. Our system learns discriminative features which are shared in the detection and the description stages. This eliminates the need for hand-crafted features, and allows a minimal-overhead application of the method to new modalities and organs. The proposed approach estimates values of the standard radiological lexicon descriptors from the image data. These descriptors represent the basis for a diagnosis. Based on the descriptor values, the method generates a structured medical report. The proposed approach should help radiologists to understand a diagnostic decision of a computer aided diagnosis (CADx) system. We test the proposed method on proprietary and publicly available breast databases, and show that our method outperforms the competing approaches.",
"title": ""
},
{
"docid": "566a0de590689dd333c06a3b37e115e9",
"text": "The design and implementation of a low power high speed differential signaling input/output (I/O) interface in 0.18um CMOS technology is presented. The motivations for smaller signal swings in transmission are discussed. The prototype chip supports 4 Gbps data rate with less than 10mA current at 1.8V supply according to Cadence Spectre post-layout simulations. Performance comparisons between the proposed device and other signaling technologies reported recently are given.",
"title": ""
},
{
"docid": "f829820706687c186e998bfed5be9c42",
"text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such result can have practical implications for realworld applications, where faults can be introduced by simpler means (such as altering the supply voltage).",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
{
"docid": "5235680a37ab34b07109bf1841696873",
"text": "We describe a novel convolutional neural network architecture with k-max pooling layer that is able to successfully recover the structure of Chinese sentences. This network can capture active features for unseen segments of a sentence to measure how likely the segments are merged to be the constituents. Given an input sentence, after all the scores of possible segments are computed, an efficient dynamic programming parsing algorithm is used to find the globally optimal parse tree. A similar network is then applied to predict syntactic categories for every node in the parse tree. Our networks archived competitive performance to existing benchmark parsers on the CTB-5 dataset without any task-specific feature engineering.",
"title": ""
},
{
"docid": "9882c528dce5e9bb426d057ee20a520c",
"text": "The use of herbal medicinal products and supplements has increased tremendously over the past three decades with not less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use are either poorly monitored or not even monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative, therefore, that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.",
"title": ""
},
{
"docid": "b99c42f412408610e1bfd414f4ea6b9f",
"text": "ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have lead to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.",
"title": ""
},
{
"docid": "829a5f2abb59d2d90b7665a4d6f75530",
"text": "Neurocristic cutaneous hamartomas (NCHs) result from aberrant development of the neuromesenchyme. In addition to a dermal melanocytic component, these tumors can contain neuro sustentacular and fibrogenic components. The clinical importance of these lesions includes the potential for misdiagnosis as well as the development of malignant melanomas over a poorly described period of time. We present a rare case of NCH of the scalp in a 1-year-old female.",
"title": ""
},
{
"docid": "af8ddeeee74489b566b4500c6fb6c471",
"text": "This paper discusses the effect of inductive coil shape on the sensing performance of a linear displacement sensor. The linear displacement sensor consists of a thin type inductive coil with a thin pattern guide, thus being suitable for tiny space applications. The position can be detected by measuring the inductance of the inductive coil. At each position due to the change in inductive coil area facing the pattern guide the value of inductance is different. Therefore, the objective of this research is to study various inductive coil pattern shapes and to propose the pattern that can achieve good sensing performance. Various shapes of meander, triangular type meander, square and circle shape with different turn number of inductive coils are examined in this study. The inductance is measured with the sensor sensitivity and linearity as a performance evaluation parameter of the sensor. In conclusion, each inductive coil shape has its own advantages and disadvantages. For instance, the circle shape inductive coil produces high sensitivity with a low linearity response. Meanwhile, the square shape inductive coil has a medium sensitivity with higher linearity.",
"title": ""
},
{
"docid": "456b7ad01115d9bc04ca378f1eb6d7f2",
"text": "Article history: Received 13 October 2007 Received in revised form 12 June 2008 Accepted 31 July 2008",
"title": ""
},
{
"docid": "e9b7e99f8f5f60305056ca0f7855c626",
"text": "This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.",
"title": ""
},
{
"docid": "56a8d24e4335841cf488373e79cdeaef",
"text": "Weather forecasting is a canonical predictive challenge that has depended primarily on model-based methods. We explore new directions with forecasting weather as a data-intensive challenge that involves inferences across space and time. We study specifically the power of making predictions via a hybrid approach that combines discriminatively trained predictive models with a deep neural network that models the joint statistics of a set of weather-related variables. We show how the base model can be enhanced with spatial interpolation that uses learned long-range spatial dependencies. We also derive an efficient learning and inference procedure that allows for large scale optimization of the model parameters. We evaluate the methods with experiments on real-world meteorological data that highlight the promise of the approach.",
"title": ""
},
{
"docid": "80336a3bba9c0d7fd692b1321c0739f6",
"text": "Fine-grained image classification is to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions among similar subcategories. However, existing methods generally have two limitations: (1) Discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck of classification speed. (2) The training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and the obstacle of marching towards practical application. It is highly challenging to address the two key limitations simultaneously, and existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: (1) n-pathway end-to-end discriminative localization network is designed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by region proposal network to accelerate the process of generating region proposals as well as reduce the computation of convolutional operation. (2) Multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different level attentions focus on different characteristics of the image, which are complementary and boost the classification accuracy. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. Compared with state-of-theart methods on 2 widely-used fine-grained image classification datasets, our WSDL approach achieves both the best accuracy and efficiency of classification.",
"title": ""
},
{
"docid": "89e6313014ad29e191e06aee8cf5f964",
"text": "Currently, mobile operating systems are dominated by the duopoly of iOS and Android. App projects that intend to reach a high number of customers need to target these two platforms foremost. However, iOS and Android do not have an officially supported common development framework. Instead, different development approaches are available for multi-platform development.\n The standard taxonomy for different development approaches of mobile applications is: Web Apps, Native Apps, Hybrid Apps. While this made perfect sense for iPhone development, it is not accurate for Android or cross-platform development, for example.\n In this paper, a new taxonomy is proposed. Based on the fundamental difference in the tools and programming languages used for the task, six different categories are proposed for everyday use: Endemic Apps, Web Apps, Hybrid Web Apps, Hybrid Bridged Apps, System Language Apps, and Foreign Language Apps. In addition, when a more precise distinction is necessary, a total of three main categories and seven subcategories are defined.\n The paper also contains a short overview of the advantages and disadvantages of the approaches mentioned.",
"title": ""
},
{
"docid": "697491cc059e471f0c97a840a2a9fca7",
"text": "This paper presents a virtual reality (VR) simulator for four-arm disaster response robot OCTOPUS, which has high capable of both mobility and workability. OCTOPUS has 26 degrees of freedom (DOF) and is currently teleoperated by two operators, so it is quite difficult to operate OCTOPUS. Thus, we developed a VR simulator for training operation, developing operator support system and control strategy. Compared with actual robot and environment, VR simulator can reproduce them at low cost and high efficiency. The VR simulator consists of VR environment and human-machine interface such as operation-input and video- and sound-output, based on robot operation system (ROS) and Gazebo. To enhance work performance, we implement indicators and data collection functions. Four tasks such as rough terrain passing, high-step climbing, obstacle stepping over, and object transport were conducted to evaluate OCTOPUS itself and our VR simulator. The results indicate that operators could complete all the tasks but the success rate differed in tasks. Smooth and stable operations increased the work performance, but sudden change and oscillation of operation degraded it. Cooperating multi-joint adequately is quite important to execute task more efficiently.",
"title": ""
},
{
"docid": "d2e494f109ea50504298abae04e102e0",
"text": "A widely cited result asserts that experts' superiority over novices in recalling meaningful material from their domain of expertise vanishes when they are confronted with random material. A review of recent chess experiments in which random positions served as control material (presentation time between 3 and 10 sec) shows, however, that strong players generally maintain some superiority over weak players even with random positions, although the relative difference between skill levels is much smaller than with game positions. The implications of this finding for expertise in chess are discussed and the question of the recall of random material in other domains is raised.",
"title": ""
},
{
"docid": "271893ebedf72b82778f1a026ad858ff",
"text": "OBJECTIVE\nTo report on a case of a pathological burst fracture in the cervical spine where typical core red flag tests failed to identify a significant lesion, and to remind chiropractors to be vigilant in the recognition of subtle signs and symptoms of disease processes.\n\n\nCLINICAL FEATURES\nA 61-year-old man presented to a chiropractic clinic with neck pain that began earlier that morning. After a physical exam that was relatively unremarkable, imaging identified a burst fracture in the cervical spine.\n\n\nINTERVENTION & OUTCOMES\nThe patient was sent by ambulance to the hospital where he was diagnosed with multiple myeloma. No medical intervention was performed on the fracture.\n\n\nSUMMARY\nThe patient's initial physical examination was largely unremarkable, with an absence of clinical red flags. The screening tools were non-diagnostic. Pain with traction and the sudden onset of symptoms prompted further investigation with plain film imaging of the cervical spine. This identified a pathological burst fracture in the C4 vertebrae.",
"title": ""
},
{
"docid": "91e9f4d67c89aea99299966492648300",
"text": "In safety critical domains, system test cases are often derived from functional requirements in natural language (NL) and traceability between requirements and their corresponding test cases is usually mandatory. The definition of test cases is therefore time-consuming and error prone, especially so given the quickly rising complexity of embedded systems in many critical domains. Though considerable research has been devoted to automatic generation of system test cases from NL requirements, most of the proposed approaches re- quire significant manual intervention or additional, complex behavioral modelling. This significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case spec- ifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifica- tions and domain modelling are common and accepted prac- tice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. In order to extract behavioral information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifica- tions, and constraint solving.",
"title": ""
}
] |
scidocsrr
|
8320c98d92a8ccf0ff82aba45ee724a6
|
Word cloud of online hotel reviews in Chiang Mai for customer satisfaction analysis
|
[
{
"docid": "3eeacf0fb315910975e5ff0ffc4fe800",
"text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classi cation, and clustering. While search and classi cation are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.",
"title": ""
}
] |
[
{
"docid": "822e37a65bc226c2de9ed323d4ecdaa9",
"text": "Rainfall is one of the major source of freshwater for all the organism around the world. Rainfall prediction model provides the information regarding various climatological variables on the amount of rainfall. In recent days, Deep Learning enabled the self-learning data labels which allows to create a data-driven model for a time series dataset. It allows to make the anomaly/change detection from the time series data and also predicts the future event's data with respect to the events occurred in the past. This paper deals with obtaining models of the rainfall precipitation by using Deep Learning Architectures (LSTM and ConvNet) and determining the better architecture with RMSE of LSTM as 2.55 and RMSE of ConvNet as 2.44 claiming that for any time series dataset, Deep Learning models will be effective and efficient for the modellers.",
"title": ""
},
{
"docid": "6917d77e52481785e89211d61f6bbe09",
"text": "Recent studies show that disk-based graph computation on just a single PC can be as highly competitive as cluster-based computing systems on large-scale problems. Inspired by this remarkable progress, we develop VENUS, a disk-based graph computation system which is able to handle billion-scale problems efficiently on a commodity PC. VENUS adopts a novel computing architecture that features vertex-centric “streamlined” processing - the graph is sequentially loaded and the update functions are executed in parallel on the fly. VENUS deliberately avoids loading batch edge data by separating read-only structure data from mutable vertex data on disk. Furthermore, it minimizes random IOs by caching vertex data in main memory. The streamlined processing is realized with efficient sequential scan over massive structure data and fast feeding a large number of update functions. Extensive evaluation on large real-world and synthetic graphs has demonstrated the efficiency of VENUS. For example, VENUS takes just 8 minutes with hard disk for PageRank on the Twitter graph with 1.5 billion edges. In contrast, Spark takes 8.1 minutes with 50 machines and 100 CPUs, and GraphChi takes 13 minutes using fast SSD drive.",
"title": ""
},
{
"docid": "eb83f7367ba11bb5582864a08bb746ff",
"text": "Probabilistic inference algorithms for find ing the most probable explanation, the max imum aposteriori hypothesis, and the maxi mum expected utility and for updating belief are reformulated as an elimination-type al gorithm called bucket elimination. This em phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition ing and elimination within this framework. Bounds on complexity are given for all the al gorithms as a function of the problem's struc ture.",
"title": ""
},
{
"docid": "e03139945eb01b44b15980f9ed570cf8",
"text": "A cloud customer’s inability to verifiably trust an infrastructure provider with the security of its data inhibits adoption of cloud computing. Customers could establish trust with secure runtime integrity measurements of their virtual machines (VMs). The runtime state of a VM, captured via a snapshot, is used for integrity measurement, migration, malware detection, correctness validation, and other purposes. However, commodity virtualized environments operate the snapshot service from a privileged VM. In public cloud environments, a compromised privileged VM or its potentially malicious administrators can easily subvert the integrity of a customer VMs snapshot. To this end, we present HyperShot, a hypervisor-based system that captures VM snapshots whose integrity cannot be compromised by a rogue privileged VM or its administrators. HyperShot additionally generates trusted snapshots of the privileged VM itself, thus contributing to the increased security and trustworthiness of the entire cloud infrastructure.",
"title": ""
},
{
"docid": "9afc04ce0ddde03789f4eaa4eab39e09",
"text": "In this paper we propose a novel method for recognizing human actions by exploiting a multi-layer representation based on a deep learning based architecture. A first level feature vector is extracted and then a high level representation is obtained by taking advantage of a Deep Belief Network trained using a Restricted Boltzmann Machine. The classification is finally performed by a feed-forward neural network. The main advantage behind the proposed approach lies in the fact that the high level representation is automatically built by the system exploiting the regularities in the dataset; given a suitably large dataset, it can be expected that such a representation can outperform a hand-design description scheme. The proposed approach has been tested on two standard datasets and the achieved results, compared with state of the art algorithms, confirm its effectiveness.",
"title": ""
},
{
"docid": "805f445952a94a0e068966998b486db4",
"text": "Narcissistic personality disorder (NPD) is a trait-based disorder that can be understood as a pathological amplification of narcissistic traits. While temperamental vulnerability and psychological adversity are risk factors for NPD, sociocultural factors are also important. This review hypothesizes that increases in narcissistic traits and cultural narcissism could be associated with changes in the prevalence of NPD. These shifts seem to be a relatively recent phenomenon, driven by social changes associated with modernity. While the main treatment for NPD remains psychotherapy, that form of treatment is itself a product of modernity and individualism. The hypothesis is presented that psychological treatment, unless modified to address the specific problems associated with NPD, could run the risk of supporting narcissism.",
"title": ""
},
{
"docid": "889b4dabf8d9e9dbc6e3ae9e6dd9759f",
"text": "Neuroscience is undergoing faster changes than ever before. Over 100 years our field qualitatively described and invasively manipulated single or few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years neuroscience spawned quantitative datasets of unprecedented breadth (e.g., microanatomy, synaptic connections, and optogenetic brain-behavior assays) and size (e.g., cognition, brain imaging, and genetics). While growing data availability and information granularity have been amply discussed, we direct attention to a less explored question: How will the unprecedented data richness shape data analysis practices? Statistical reasoning is becoming more important to distill neurobiological knowledge from healthy and pathological brain measurements. We argue that large-scale data analysis will use more statistical models that are non-parametric, generative, and mixing frequentist and Bayesian aspects, while supplementing classical hypothesis testing with out-of-sample predictions.",
"title": ""
},
{
"docid": "aa1071a3b5b720922fc254e1e4b9d70d",
"text": "This paper presents a zero-voltage-switching (ZVS) full-bridge dc-dc converter combing resonant and pulse-width-modulation (PWM) power conversions for electric vehicle battery chargers. In the proposed converter, a half-bridge LLC resonant circuit shares the lagging leg with a phase-shift full-bridge (PSFB) dc-dc circuit to guarantee ZVS of the lagging-leg switches from zero to full load. A secondary-side hybrid-switching circuit, which is formed by the leakage inductance, output inductor of the PSFB dc-dc circuit, a small additional resonant capacitor, and two additional diodes, is integrated at the secondary side of the PSFB dc-dc circuit. With the clamp path of a hybrid-switching circuit, the voltage overshoots that arise during the turn off of the rectifier diodes are eliminated and the voltage of bridge rectifier is clamped to the minimal achievable value, which is equal to secondary-reflected input voltage of the transformer. The sum of the output voltage of LLC resonant circuit and the resonant capacitor voltage of the hybrid-switching circuit is applied between the bridge rectifier and the output inductor of the PSFB dc-dc circuit during the freewheeling phases. As a result, the primary-side circulating current of the PSFB dc-dc circuit is instantly reset to zero, achieving minimized circulating losses. The effectiveness of the proposed converter was experimentally verified using a 4-kW prototype circuit. The experimental results show 98.6% peak efficiency and high efficiency over wide load and output voltage ranges.",
"title": ""
},
{
"docid": "01d77c925c62a7d26ff294231b449e95",
"text": "Al~tmd--We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and oo-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.",
"title": ""
},
{
"docid": "9c8c4180950cc54d859feb1e14a73989",
"text": "Traditionally, motor learning has been studied as an implicit learning process, one in which movement errors are used to improve performance in a continuous, gradual manner. The cerebellum figures prominently in this literature given well-established ideas about the role of this system in error-based learning and the production of automatized skills. Recent developments have brought into focus the relevance of multiple learning mechanisms for sensorimotor learning. These include processes involving repetition, reinforcement learning, and strategy utilization. We examine these developments, considering their implications for understanding cerebellar function and how this structure interacts with other neural systems to support motor learning. Converging lines of evidence from behavioral, computational, and neuropsychological studies suggest a fundamental distinction between processes that use error information to improve action execution or action selection. While the cerebellum is clearly linked to the former, its role in the latter remains an open question.",
"title": ""
},
{
"docid": "6bbb75137cee4cd173e2f7d082da6a2c",
"text": "Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or the noise brought by other tasks. In this paper, we propose an adversarial multi-task learning framework, alleviating the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrates the benefits of our approach. Besides, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at http://nlp.fudan.",
"title": ""
},
{
"docid": "77749f228ebcadfbff9202ee17225752",
"text": "Temporal object detection has attracted significant attention, but most popular detection methods cannot leverage rich temporal information in videos. Very recently, many algorithms have been developed for video detection task, yet very few approaches can achieve real-time online object detection in videos. In this paper, based on the attention mechanism and convolutional long short-term memory (ConvLSTM), we propose a temporal single-shot detector (TSSD) for real-world detection. Distinct from the previous methods, we take aim at temporally integrating pyramidal feature hierarchy using ConvLSTM, and design a novel structure, including a low-level temporal unit as well as a high-level one for multiscale feature maps. Moreover, we develop a creative temporal analysis unit, namely, attentional ConvLSTM, in which a temporal attention mechanism is specially tailored for background suppression and scale suppression, while a ConvLSTM integrates attention-aware features across time. An association loss and a multistep training are designed for temporal coherence. Besides, an online tubelet analysis (OTA) is exploited for identification. Our framework is evaluated on ImageNet VID dataset and 2DMOT15 dataset. Extensive comparisons on the detection and tracking capability validate the superiority of the proposed approach. Consequently, the developed TSSD-OTA achieves a fast speed and an overall competitive performance in terms of detection and tracking. Finally, a real-world maneuver is conducted for underwater object grasping.",
"title": ""
},
{
"docid": "eb0a5d496dd9a427ab7d52416f70aab3",
"text": "Progress in habit theory can be made by distinguishing habit from frequency of occurrence, and using independent measures for these constructs. This proposition was investigated in three studies using a longitudinal, cross-sectional and experimental design on eating, mental habits and word processing, respectively. In Study 1, snacking habit and past snacking frequency independently predicted later snacking behaviour, while controlling for the theory of planned behaviour variables. Habit fully mediated the effect of past on later behaviour. In Study 2, habitual negative self-thinking and past frequency of negative self-thoughts independently predicted self-esteem and the presence of depressive and anxiety symptoms. In Study 3, habit varied as a function of experimentally manipulated task complexity, while behavioural frequency was held constant. Taken together, while repetition is necessary for habits to develop, these studies demonstrate that habit should not be equated with frequency of occurrence, but rather should be considered as a mental construct involving features of automaticity, such as lack of awareness, difficulty to control and mental efficiency.",
"title": ""
},
{
"docid": "c78e0662b9679a70f1ec4416b3abd2b4",
"text": "This article offers possibly the first peer-reviewed study on the training routines of elite eathletes, with special focus on the subjects’ physical exercise routines. The study is based on a sample of 115 elite e-athletes. According to their responses, e-athletes train approximately 5.28 hours every day around the year on the elite level. Approximately 1.08 hours of that training is physical exercise. More than half (55.6%) of the elite e-athletes believe that integrating physical exercise in their training programs has a positive effect on esport performance; however, no less than 47.0% of the elite e-athletes do their physical exercise chiefly to maintain overall health. Accordingly, the study indicates that elite e-athletes are active athletes as well, those of age 18 and older exercising physically more than three times the daily 21-minute activity recommendation given by World Health Organization.",
"title": ""
},
{
"docid": "7014a3c3fa78e0d610388dc08733478e",
"text": "The demands placed on today’s organizations and their managers suggest that we have to develop pedagogies combining analytic reasoning with a more exploratory skill set that design practitioners have embraced and business schools have traditionally neglected. Design thinking is an iterative, exploratory process involving visualizing, experimenting, creating, and prototyping of models, and gathering feedback. It is a particularly apt method for addressing innovation and messy, ill-structured situations. We discuss key characteristics of design thinking, link design-thinking characteristics to recent studies of cognition, and note how the repertoire of skills and methods that embody design thinking can address deficits in business school education. ........................................................................................................................................................................",
"title": ""
},
{
"docid": "b0a0ad5f90d849696e3431373db6b4a5",
"text": "A comparative study of the structure of the flower in three species of Robinia L., R. pseudoacacia, R. × ambigua, and R. neomexicana, was carried out. The widely naturalized R. pseudoacacia, as compared to the two other species, has the smallest sizes of flower organs at all stages of development. Qualitative traits that describe each phase of the flower development were identified. A set of microscopic morphological traits of the flower (both quantitative and qualitative) was analyzed. Additional taxonomic traits were identified: shape of anthers, size and shape of pollen grains, and the extent of pollen fertility.",
"title": ""
},
{
"docid": "c09f3698f350ef749d3ef3e626c86788",
"text": "The te rm \"reactive system\" was introduced by David Harel and Amir Pnueli [HP85], and is now commonly accepted to designate permanent ly operating systems, and to distinguish them from \"trans]ormational systems\" i.e, usual programs whose role is to terminate with a result, computed from an initial da ta (e.g., a compiler). In synchronous programming, we understand it in a more restrictive way, distinguishing between \"interactive\" and \"reactive\" systems: Interactive systems permanent ly communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., making it wait. Concurrent processes considered in operat ing systems or in data-base management , are generally interactive. Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process. The specific features of reactive systems have been pointed out many times [Ha193,BCG88,Ber89]:",
"title": ""
},
{
"docid": "f6333ab767879cf1673bb50aeeb32533",
"text": "Github facilitates the pull-request mechanism as an outstanding social coding paradigm by integrating with social media. The review process of pull-requests is a typical crowd sourcing job which needs to solicit opinions of the community. Recommending appropriate reviewers can reduce the time between the submission of a pull-request and the actual review of it. In this paper, we firstly extend the traditional Machine Learning (ML) based approach of bug triaging to reviewer recommendation. Furthermore, we analyze social relations between contributors and reviewers, and propose a novel approach to recommend highly relevant reviewers by mining comment networks (CN) of given projects. Finally, we demonstrate the effectiveness of these two approaches with quantitative evaluations. The results show that CN-based approach achieves a significant improvement over the ML-based approach, and on average it reaches a precision of 78% and 67% for top-1 and top-2 recommendation respectively, and a recall of 77% for top-10 recommendation.",
"title": ""
},
{
"docid": "bb5c4d59f598427ea1e2946ae74a7cc8",
"text": "In a nutshell: This course comprehensively covers important user experience (UX) evaluation methods as well as opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the art user experience evaluation methods going way-beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the efforts of user experience evaluation of the gaming and human computer interaction communities during the last 15 years.",
"title": ""
}
] |
scidocsrr
|
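The rainfall-prediction abstract in the record above (docid 822e37a65bc226c2de9ed323d4ecdaa9) compares LSTM and ConvNet architectures on a precipitation time series. As a hedged illustration only, the following sketch shows the general shape of such an LSTM regressor; the window length, layer sizes, training settings, and the synthetic data are assumptions of this sketch, not the configuration reported in that study.

```python
# Minimal sketch of an LSTM regressor for a univariate rainfall series.
# Window length, layer sizes, and optimizer settings are illustrative
# assumptions, not the configuration used in the cited study.
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    X = np.asarray(X, dtype=np.float32)[..., np.newaxis]
    y = np.asarray(y, dtype=np.float32)
    return X, y

# Hypothetical monthly rainfall values (mm); replace with real data.
rain = np.abs(np.random.randn(240).astype(np.float32)) * 100.0
X, y = make_windows(rain, window=12)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

# RMSE on the training windows, comparable in spirit to the figures quoted above.
rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))
print("RMSE:", rmse)
```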
7d084eb2d4d018b4de420ce0a6d13758
|
Review of the BCI Competition IV
|
[
{
"docid": "e021eeb2edff46128f224cefd8206c92",
"text": "OBJECTIVE\nA fully automated method for reducing EOG artifacts is presented and validated.\n\n\nMETHODS\nThe correction method is based on regression analysis and was applied to 18 recordings with 22 channels and approx. 6 min each. Two independent experts scored the original and corrected EEG in a blinded evaluation.\n\n\nRESULTS\nThe expert scorers identified in 5.9% of the raw data some EOG artifacts; 4.7% were corrected. After applying the EOG correction, the expert scorers identified in another 1.9% of the data some EOG artifacts, which were not recognized in the uncorrected data.\n\n\nCONCLUSIONS\nThe advantage of a fully automated reduction of EOG artifacts justifies the small additional effort of the proposed method and is a viable option for reducing EOG artifacts. The method has been implemented for offline and online analysis and is available through BioSig, an open source software library for biomedical signal processing.\n\n\nSIGNIFICANCE\nVisual identification and rejection of EOG-contaminated EEG segments can miss many EOG artifacts, and is therefore not sufficient for removing EOG artifacts. The proposed method was able to reduce EOG artifacts by 80%.",
"title": ""
}
] |
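The positive passage above describes a fully automated, regression-based reduction of EOG artifacts implemented in BioSig. The following is a minimal, generic sketch of that kind of regression step (least-squares estimation of EOG propagation coefficients and subtraction of the projected EOG); it is an illustrative assumption of how such a step can look, not the BioSig implementation, and the data shapes are hypothetical.

```python
# Generic least-squares EOG regression in the spirit of the passage above.
# This is an illustrative sketch, not the BioSig implementation it refers to.
import numpy as np

def remove_eog(eeg, eog):
    """
    eeg: (n_samples, n_eeg_channels) array of EEG recordings
    eog: (n_samples, n_eog_channels) array recorded simultaneously
    Returns the EEG with the least-squares projection of the EOG removed.
    """
    eeg = np.asarray(eeg, dtype=float)
    eog = np.asarray(eog, dtype=float)
    # Work on mean-centred signals so channel offsets do not bias the coefficients.
    eog_c = eog - eog.mean(axis=0)
    eeg_c = eeg - eeg.mean(axis=0)
    # Solve eog_c @ B ~= eeg_c for the propagation coefficients B.
    B, *_ = np.linalg.lstsq(eog_c, eeg_c, rcond=None)
    return eeg - eog_c @ B

# Hypothetical data: 10 s at 250 Hz, 22 EEG channels, 3 EOG channels.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2500, 22))
eog = rng.standard_normal((2500, 3))
cleaned = remove_eog(eeg, eog)
print(cleaned.shape)  # (2500, 22)
```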
[
{
"docid": "3f5f7b099dff64deca2a265c89ff481e",
"text": "We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6/spl deg/ are obtained for a variety of walking motions.",
"title": ""
},
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "b596be97699686e5e37cab71bee8fe4a",
"text": "The task of selecting project portfolios is an important and recurring activity in many organizations. There are many techniques available to assist in this process, but no integrated framework for carrying it out. This paper simpli®es the project portfolio selection process by developing a framework which separates the work into distinct stages. Each stage accomplishes a particular objective and creates inputs to the next stage. At the same time, users are free to choose the techniques they ®nd the most suitable for each stage, or in some cases to omit or modify a stage if this will simplify and expedite the process. The framework may be implemented in the form of a decision support system, and a prototype system is described which supports many of the related decision making activities. # 1999 Published by Elsevier Science Ltd and IPMA. All rights reserved",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "859e8a5fedd8376210e85aeb19b16d42",
"text": "Virtual communities have become ubiquitous and vital in nearly all professions. These communities typically involve voluntary participation, and given that these communities are not organization specific, community members' motivations for participation vary. In this study, we investigate the motivations of the members of a professional virtual community to engage in knowledge exchange. We synthesize social exchange theory and the theory of reasoned action to identify critical determinants of attitudes toward knowledge exchange in virtual communities, namely, trust among participants, anticipated reciprocal relationships, and the relevance of the community to participants' jobs. Additionally, we posit that attitudes will influence the intention to use the virtual community and that this relationship will be moderated by the perceived quality of the information exchanged within the community. We test our research model using data compiled from a community of research scientists in South Korea. Our findings indicate that trust among participants has a positive influence on attitudes toward both sharing and acquiring knowledge. The anticipation of a reciprocal relationship has a positive effect on attitudes toward knowledge acquisition, and job relevance has a positive effect on attitudes toward knowledge sharing. Furthermore, attitudes toward knowledge acquisition affect attitudes toward knowledge sharing, and attitudes toward knowledge sharing positively influence intentions to use a virtual community. We also find that perceived information quality negatively moderates the relationship between attitudes toward knowledge sharing and user intentions to use a virtual community. We interpret and discuss these findings, and their implications for research and practice.",
"title": ""
},
{
"docid": "23677c0107696de3cc630f424484284a",
"text": "With the development of expressway, the vehicle path recognition based on RFID is designed and an Electronic Toll Collection system of expressway will be implemented. It uses a passive RFID tag as carrier to identify Actual vehicle path in loop road. The ETC system will toll collection without parking, also census traffic flow and audit road maintenance fees. It is necessary to improve expressway management.",
"title": ""
},
{
"docid": "efc6c423fa98c012543352db8fb0688a",
"text": "Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data aggregation problems in energy constrained sensor networks. The main goal of data aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this paper, we present a survey of data aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency and data accuracy. We conclude with possible future research directions.",
"title": ""
},
{
"docid": "d991f2ecffd6ddb045a7917ac5e99011",
"text": "Human intervention trials have provided evidence for protective effects of various (poly)phenol-rich foods against chronic disease, including cardiovascular disease, neurodegeneration, and cancer. While there are considerable data suggesting benefits of (poly)phenol intake, conclusions regarding their preventive potential remain unresolved due to several limitations in existing studies. Bioactivity investigations using cell lines have made an extensive use of both (poly)phenolic aglycones and sugar conjugates, these being the typical forms that exist in planta, at concentrations in the low-μM-to-mM range. However, after ingestion, dietary (poly)phenolics appear in the circulatory system not as the parent compounds, but as phase II metabolites, and their presence in plasma after dietary intake rarely exceeds nM concentrations. Substantial quantities of both the parent compounds and their metabolites pass to the colon where they are degraded by the action of the local microbiota, giving rise principally to small phenolic acid and aromatic catabolites that are absorbed into the circulatory system. This comprehensive review describes the different groups of compounds that have been reported to be involved in human nutrition, their fate in the body as they pass through the gastrointestinal tract and are absorbed into the circulatory system, the evidence of their impact on human chronic diseases, and the possible mechanisms of action through which (poly)phenol metabolites and catabolites may exert these protective actions. It is concluded that better performed in vivo intervention and in vitro mechanistic studies are needed to fully understand how these molecules interact with human physiological and pathological processes.",
"title": ""
},
{
"docid": "f1b32219b6cd38cf8514d3ae2e926612",
"text": "Creativity refers to the potential to produce novel ideas that are task-appropriate and high in quality. Creativity in a societal context is best understood in terms of a dialectical relation to intelligence and wisdom. In particular, intelligence forms the thesis of such a dialectic. Intelligence largely is used to advance existing societal agendas. Creativity forms the antithesis of the dialectic, questioning and often opposing societal agendas, as well as proposing new ones. Wisdom forms the synthesis of the dialectic, balancing the old with the new. Wise people recognize the need to balance intelligence with creativity to achieve both stability and change within a societal context.",
"title": ""
},
{
"docid": "67f7337add485873b45e1712e49e19cf",
"text": "Through the rapid expansion worldwide of impervious areas and habitat fragmentation, urbanization has strong consequences that must be understood to efficiently manage biodiversity. We studied the effects of urbanization on flower-feeding insects by using data from a citizen science program in the Parisian region. We analysed the occurrence of insects from 46 different families on flowers of different morphologies, using landscape indices in buffer areas from a 100-m to a 4000-m radius around 1194 sampled sites. Our aims were to determine (i) how the proportion of impervious area around sampled sites affected the occurrence of flower-feeding insect families and at which landscape scales impervious area calculations best predicted these occurrences; (ii) the effect of corolla shape variables on insect family occurrences. Twenty-one families were negatively impacted by increasing proportion of impervious areas (urbanophobic) and 3 were positively impacted (urbanophilic). Urbanophobic families were most affected by the proportion of impervious areas when it was estimated within buffers of 200-m to 1400-m radii, depending on the family. Notable losses of urbanophobic families were detected at less than 50% of impervious areas, which highlights the threat to the diversity of flower-feeding insects posed by urban sprawl. Corolla shape variables were the variables most often significantly implicated in the occurrence of insect families. Urbanophobic families were negatively affected by the tubular shape of flowers, and tubular corollas were found more often in urbanized areas. These results suggest that flora management might be a key component for the conservation of insect diversity in cities.",
"title": ""
},
{
"docid": "040ff124b5e7e491ca94844b5b9fa36c",
"text": "Spam is one of the main problems in emails communications. As the volume of non-english language spam increases, little work is done in this area. For example, in Arab world users receive spam written mostly in arabic, english or mixed Arabic and english. To filter this kind of messages, this research applied several machine learning techniques. Many researchers have used machine learning techniques to filter spam email messages. This study compared six supervised machine learning classifiers which are maximum entropy, decision trees, artificial neural nets, naïve bayes, support system machines and k-nearest neighbor. The experiments suggested that words in Arabic messages should be stemmed before applying classifier. In addition, in most cases, experiments showed that classifiers using feature selection techniques can achieve comparable or better performance than filters do not used them.",
"title": ""
},
{
"docid": "a0f5d3cf110c8631747eb93b3392609c",
"text": "We use a fully timing-driven experimental flow [4] [15] in which a set of benchmark circuits are synthesized into different cluster-based [2] [3] [15] logic block architectures, which contain groups of LUTs and flip-flops. We look across all architectures with LUT sizes in the range of 2 inputs to 7 inputs, and cluster size from 1 to 10 LUTs. In order to judge the quality of the architecture we do both detailed circuit level design and measure the demand of routing resources for every circuit in each architecture.\nThese experiments have resulted in several key contributions. First, we have experimentally determined the relationship between the number of inputs required for a cluster as a function of the LUT size (K) and cluster size (N). Second, contrary to previous results, we have shown that when the cluster size is greater than four, that smaller LUTs (size 2 and 3) are almost as area efficient as 4-input LUTs, as suggested in [11]. However, our results also show that the performance of FPGAs with these small LUT sizes is significantly worse (by almost a factor of 2) than larger LUTs. Hence, as measured by area-delay product, or by performance, these would be a bad choice. Also, we have discovered that LUT sizes of 5 and 6 produce much better area results than were previously believed. Finally, our results show that a LUT size of 4 to 6 and cluster size of between 4 and 10 provides the best area-delay product for an FPGA.",
"title": ""
},
{
"docid": "ff3359fe51ed275de1f3b61eee833045",
"text": "Opinion target extraction is a fundamental task in opinion mining. In recent years, neural network based supervised learning methods have achieved competitive performance on this task. However, as with any supervised learning method, neural network based methods for this task cannot work well when the training data comes from a different domain than the test data. On the other hand, some rule-based unsupervised methods have shown to be robust when applied to different domains. In this work, we use rule-based unsupervised methods to create auxiliary labels and use neural network models to learn a hidden representation that works well for different domains. When this hidden representation is used for opinion target extraction, we find that it can outperform a number of strong baselines with a large margin.",
"title": ""
},
{
"docid": "263258f766344104fdd98c6810371460",
"text": "Ethernet's plug-&-play feature is built on its use of flat (location independent) addresses and use of broadcasts to resolve unknown MAC addresses. While plug-&-play is one of Ethernet's most attractive features, it also affects its scalability. As the number of active MAC addresses in the network grows beyond the capacity of forwarding caches in bridges, the odds of \"cache-misses,\" each triggering a broadcast, grow as well. The resulting increase in broadcast bandwidth consumption affects scalability. To address this problem, we propose a simple address resolution scheme based on an adaptation of distributed hash tables where a single query suffices in the steady state. The new scheme is implemented on advanced bridges maintaining backward compatibility with legacy bridges and eliminating reliance on broadcasts for address discovery. Comparisons with a legacy, broadcast-based scheme are carried out along several metrics that demonstrate the new scheme's robustness and ability to improve scalability.",
"title": ""
},
{
"docid": "22cb22b6a3f46b4ca3325be08ad9f077",
"text": "The purpose of this study was to evaluate setup accuracy and quantify random and systematic errors of the BrainLAB stereotactic immobilization mask and localization system using kV on-board imaging. Nine patients were simulated and set up with the BrainLAB stereotactic head immobilization mask and localizer to be treated for brain lesions using single and hypofractions. Orthogonal pairs of projections were acquired using a kV on-board imager mounted on a Varian Trilogy machine. The kV projections were then registered with digitally-reconstructed radiographs (DRR) obtained from treatment planning. Shifts between the kV images and reference DRRs were calculated in the different directions: anterior-posterior (A-P), medial-lateral (R-L) and superior-inferior (S-I). If the shifts were larger than 2mm in any direction, the patient was reset within the immobilization mask until satisfying setup accuracy based on image guidance has been achieved. Shifts as large as 4.5 mm, 5.0 mm, 8.0 mm in the A-P, R-L and S-I directions, respectively, were measured from image registration of kV projections and DRRs. These shifts represent offsets between the treatment and simulation setup using immobilization mask. The mean offsets of 0.1 mm, 0.7 mm, and -1.6 mm represent systematic errors of the BrainLAB localizer in the A-P, R-L and S-I directions, respectively. The mean of the radial shifts is about 1.7 mm. The standard deviations of the shifts were 2.2 mm, 2.0 mm, and 2.6 mm in A-P, R-L and S-I directions, respectively, which represent random patient setup errors with the BrainLAB mask. The Brain-LAB mask provides a noninvasive, practical and flexible immobilization system that keeps the patients in place during treatment. Relying on this system for patient setup might be associated with significant setup errors. Image guidance with the kV on-board imager provides an independent verification technique to ensure accuracy of patient setup. Since the patient may relax or move during treatment, uncontrolled and undetected setup errors may be produced with patients that are not well-immobilized. Therefore, the combination of stereotactic immobilization and image guidance achieves more controlled and accurate patient setup within 2mm in A-P, R-L and S-I directions.",
"title": ""
},
{
"docid": "59c2e1dcf41843d859287124cc655b05",
"text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.",
"title": ""
},
{
"docid": "c15fdbcd454a2293a6745421ad397e04",
"text": "The amount of research related to Internet marketing has grown rapidly since the dawn of the Internet Age. A review of the literature base will help identify the topics that have been explored as well as identify topics for further research. This research project collects, synthesizes, and analyses both the research strategies (i.e., methodologies) and content (e.g., topics, focus, categories) of the current literature, and then discusses an agenda for future research efforts. We analyzed 411 articles published over the past eighteen years (1994-present) in thirty top Information Systems (IS) journals and 22 articles in the top 5 Marketing journals. The results indicate an increasing level of activity during the 18-year period, a biased distribution of Internet marketing articles focused on exploratory methodologies, and several research strategies that were either underrepresented or absent from the pool of Internet marketing research. We also identified several subject areas that need further exploration. The compilation of the methodologies used and Internet marketing topics being studied can serve to motivate researchers to strengthen current research and explore new areas of this research.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
{
"docid": "578d40b5c82fcc59fa2333e47a99d84c",
"text": "Brain tumor is one of the major causes of death among people. It is evident that the chances of survival can be increased if the tumor is detected and classified correctly at its early stage. Conventional methods involve invasive techniques such as biopsy, lumbar puncture and spinal tap method, to detect and classify brain tumors into benign (non cancerous) and malignant (cancerous). A computer aided diagnosis algorithm has been designed so as to increase the accuracy of brain tumor detection and classification, and thereby replace conventional invasive and time consuming techniques. This paper introduces an efficient method of brain tumor classification, where, the real Magnetic Resonance (MR) images are classified into normal, non cancerous (benign) brain tumor and cancerous (malignant) brain tumor. The proposed method follows three steps, (1) wavelet decomposition, (2) textural feature extraction and (3) classification. Discrete Wavelet Transform is first employed using Daubechies wavelet (db4), for decomposing the MR image into different levels of approximate and detailed coefficients and then the gray level co-occurrence matrix is formed, from which the texture statistics such as energy, contrast, correlation, homogeneity and entropy are obtained. The results of co-occurrence matrices are then fed into a probabilistic neural network for further classification and tumor detection. The proposed method has been applied on real MR images, and the accuracy of classification using probabilistic neural network is found to be nearly 100%.",
"title": ""
},
{
"docid": "4d36b2d77713a762040fd4ebc68e0d54",
"text": "Diversification and fragmentation of scientific exploration brings an increasing need for integration, for example through interdisciplinary research. The field of nanoscience and nanotechnology appears to exhibit strong interdisciplinary characteristics. Our objective was to explore the structure of the field and ascertain how different research areas within this field reflect interdisciplinarity through citation patterns. The complex relations between the citing and cited articles were examined through schematic visualization. Examination of WOS categories assigned to journals shows the scatter of nano studies across a wide range of research topics. We identified four distinctive groups of categories each showing some detectable shared characteristics. Three alternative measures of similarity were employed to delineate these groups. These distinct groups enabled us to assess interdisciplinarity within the groups and relationships between the groups. Some measurable levels of interdisciplinarity exist in all groups. However, one of the groups indicated that certain categories of both citing as well as cited articles aggregate mostly in the framework of physics, chemistry, and materials. This may suggest that the nanosciences show characteristics of a distinct discipline. The similarity in citing articles is most evident inside the respective groups, though, some subgroups within larger groups are also related to each other through the similarity of cited articles.",
"title": ""
}
] |
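One of the negative passages above (the CS 229 midterm excerpt on generalized linear models, docid a34825f20b645a146857c1544c08e66e) states the exponential-family form whose typesetting had to be reconstructed. As a hedged worked sketch, one standard way to write out the requested concavity condition and the Normal-case check is given below; it is offered as an illustration, not as the official solution.

```latex
% Worked sketch for the GLM midterm passage (one standard answer, not an official solution).
With $\eta^{(i)} = \theta^{\top} x^{(i)} \in \mathbb{R}$, the log-likelihood is
\[
  \ell(\theta) \;=\; \sum_{i=1}^{m} \Bigl[ \log b\bigl(y^{(i)}\bigr)
      + \eta^{(i)}\, T\bigl(y^{(i)}\bigr) - a\bigl(\eta^{(i)}\bigr) \Bigr].
\]
The first two terms are constant or affine in $\theta$, so $\ell$ is concave for every $b(y)$ and $T(y)$,
provided $a(\eta)$ is convex, i.e.\ $a''(\eta) \ge 0$ for all $\eta$.
For the unit-variance Normal case,
\[
  b(y) = \tfrac{1}{\sqrt{2\pi}}\, e^{-y^{2}/2}, \qquad T(y) = y, \qquad a(\eta) = \tfrac{\eta^{2}}{2}
  \;\Rightarrow\; a''(\eta) = 1 > 0,
\]
so the condition from part (a) holds.
```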
scidocsrr
|
f9b1713812b981804bb5b1cbc11a4672
|
A model for warehouse layout
|
[
{
"docid": "dbafd5f4efa7fd372ca5db119624ee56",
"text": "In many distribution centers, there is a constant pressure to reduce the order throughput times. One such distribution center is the DC of De Bijenkorf, a retail organization in The Netherlands with 7 subsidiaries and a product assortment of about 300,000 SKUs (stock keeping units). The orders for the subsidiaries are picked manually in this warehouse, which is very labor intensive. Furthermore many shipments have to be finished at about the same time, which leads to peak loads in the picking process. The picking process is therefore a costly operation. In this study we have investigated the possibilities to pick the orders more efficiently, without altering the storage or material handling equipment used or the storage strategies. It appeared to be possible to obtain a reduction between 17 and 34% in walking time, by simply routing the pickers more efficiently. The amount of walking time reduction depends on the routing algorithm used. The largest saving is obtained by using an optimal routing algorithm that has been developed for De Bijenkorf. The main reason for this substantial reduction in walking time, is the change from one-sided picking to two-sided picking in the narrow aisles. It is even possible to obtain a further reduction in walking time by clustering the orders. Small orders can be combined on one pick cart and can be picked in a single route. The combined picking of several orders (constrained by the size of the orders and the cart capacity) leads to a total reduction of about 60% in walking time, using a simple order clustering strategy in combination with a newly developed routing strategy. The reduction in total order picking time and hence the reduction in the number of pickers is about 19%.",
"title": ""
}
] |
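The positive passage above notes that small orders can be combined on one pick cart, constrained by order size and cart capacity. As a generic illustration only, a simple first-fit-decreasing batching heuristic for that constraint is sketched below; the order sizes and the capacity are hypothetical, and this is not the clustering strategy used in the cited study.

```python
# Generic first-fit-decreasing order batching, illustrating the
# "combine small orders on one cart" idea in the passage above; this is not
# the clustering strategy used in the cited study.
from typing import Dict, List

def batch_orders(order_sizes: Dict[str, int], cart_capacity: int) -> List[List[str]]:
    """Group orders into batches whose total size fits on one pick cart."""
    batches: List[List[str]] = []
    loads: List[int] = []
    # Placing the largest orders first tends to leave fewer poorly filled carts.
    for order, size in sorted(order_sizes.items(), key=lambda kv: -kv[1]):
        for i, load in enumerate(loads):
            if load + size <= cart_capacity:
                batches[i].append(order)
                loads[i] += size
                break
        else:
            batches.append([order])
            loads.append(size)
    return batches

# Hypothetical order sizes in picking units and a cart that holds 20 units.
print(batch_orders({"A": 12, "B": 9, "C": 7, "D": 4, "E": 3}, cart_capacity=20))
# -> [['A', 'C'], ['B', 'D', 'E']]
```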
[
{
"docid": "48303e0519f6fe8e2106318329b84b46",
"text": "Endowing an intelligent agent with an episodic memory affords it a multitude of cognitive capabilities. However, providing efficient storage and retrieval in a task-independent episodic memory presents considerable theoretical and practical challenges. We characterize the computational issues bounding an episodic memory. We explore whether even with intractable asymptotic growth, it is possible to develop efficient algorithms and data structures for episodic memory systems that are practical for real-world tasks. We present and evaluate formal and empirical results using Soar-EpMem: a task-independent integration of episodic memory with Soar 9, providing a baseline for graph-based, taskindependent episodic memory systems.",
"title": ""
},
{
"docid": "2e016f935e8795fe1e470ff945b63646",
"text": "We address the problem of segmenting multiple object instances in complex videos. Our method does not require manual pixel-level annotation for training, and relies instead on readily-available object detectors or visual object tracking only. Given object bounding boxes at input, we cast video segmentation as a weakly-supervised learning problem. Our proposed objective combines (a) a discriminative clustering term for background segmentation, (b) a spectral clustering one for grouping pixels of same object instances, and (c) linear constraints enabling instance-level segmentation. We propose a convex relaxation of this problem and solve it efficiently using the Frank-Wolfe algorithm. We report results and compare our method to several baselines on a new video dataset for multi-instance person segmentation.",
"title": ""
},
{
"docid": "7f9640bc22241bb40154bedcfda33655",
"text": "This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of \"performance maintenance counters\". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.",
"title": ""
},
{
"docid": "d0a765968e7cc4cf8099f66e0c3267da",
"text": "We explore the lattice sphere packing representation of a multi-antenna system and the algebraic space-time (ST) codes. We apply the sphere decoding (SD) algorithm to the resulted lattice code. For the uncoded system, SD yields, with small increase in complexity, a huge improvement over the well-known V-BLAST detection algorithm. SD of algebraic ST codes exploits the full diversity of the coded multi-antenna system, and makes the proposed scheme very appealing to take advantage of the richness of the multi-antenna environment. The fact that the SD does not depend on the constellation size, gives rise to systems with very high spectral efficiency, maximum-likelihood performance, and low decoding complexity.",
"title": ""
},
{
"docid": "6fd71fe20e959bfdde866ff54b2b474b",
"text": "The IETF developed the RPL routing protocol for Low power and Lossy Networks (LLNs). RPL allows for automated setup and maintenance of the routing tree for a meshed network using a common objective, such as energy preservation or most stable routes. To handle failing nodes and other communication disturbances, RPL includes a number of error correction functions for such situations. These error handling mechanisms, while maintaining a functioning routing tree, introduce an additional complexity to the routing process. Being a relatively new protocol, the effect of the error handling mechanisms within RPL needs to be analyzed. This paper presents an experimental analysis of RPL’s error correction mechanisms by using the Contiki RPL implementation along with an SNMP agent to monitor the performance of RPL.",
"title": ""
},
{
"docid": "16e2ba731973bfdad051b775078e08be",
"text": "I examine the phenomenon of implicit learning, the process by which knowledge about the ralegoverned complexities of the stimulus environment is acquired independently of conscious attempts to do so. Our research with the two, seemingly disparate experimental paradigms of synthetic grammar learning and probability learning is reviewed and integrated with other approaches to the general problem of unconscious cognition. The conclusions reached are as follows: (a) Implicit learning produces a tacit knowledge base that is abstract and representative of the structure of the environment; (b) such knowledge is optimally acquired independently of conscious efforts to learn; and (c) it can be used implicitly to solve problems and make accurate decisions about novel stimulus circumstances. Various epistemological issues and related prob1 lems such as intuition, neuroclinical disorders of learning and memory, and the relationship of evolutionary processes to cognitive science are also discussed.",
"title": ""
},
{
"docid": "0ef3d7b26feba199df7d466d14740a57",
"text": "A parsing algorithm visualizer is a tool that visualizes the construction of a parser for a given context-free grammar and then illustrates the use of that parser to parse a given string. Parsing algorithm visualizers are used to teach the course on compiler construction which in invariably included in all undergraduate computer science curricula. This paper presents a new parsing algorithm visualizer that can visualize six parsing algorithms, viz. predictive parsing, simple LR parsing, canonical LR parsing, look-ahead LR parsing, Earley parsing and CYK parsing. The tool logically explains the process of parsing showing the calculations involved in each step. The output of the tool has been structured to maximize the learning outcomes and contains important constructs like FIRST and FOLLOW sets, item sets, parsing table, parse tree and leftmost or rightmost derivation depending on the algorithm being visualized. The tool has been used to teach the course on compiler construction at both undergraduate and graduate levels. An overall positive feedback was received from the students with 89% of them saying that the tool helped them in understanding the parsing algorithms. The tool is capable of visualizing multiple parsing algorithms and 88% students used it to compare the algorithms.",
"title": ""
},
{
"docid": "bd3feae3ff8f8546efc1290e325b5a4e",
"text": "A bond pad failure mechanism of galvanic corrosion was studied. Analysis results showed that over-etch process, EKC and DI water over cleaning revealed more pitting with Cu seed due to galvanic corrosion. To control and eliminate galvanic corrosion, the etch recipe was optimized and etch time was reduced about 15% to prevent damaging the native oxide. EKC cleaning time was remaining unchanged in order to maintain bond pad F level at minimum level. In this study, the PRS process was also optimized and CF4 gas ratio was reduced about 45%. Moreover, 02 process was added after PRS process so as to increase the native oxide layer on Al bondpads to prevent galvanic corrosion.",
"title": ""
},
{
"docid": "bde7c16585b284ed9b6b0e54110deeee",
"text": "BACKGROUND\nEpidemiological reports suggest that Asians consuming a diet high in soy have a low incidence of prostate cancer. In animal models, soy and genistein have been demonstrated to suppress the development of prostate cancer. In this study, we investigate the mechanism of action, bioavailability, and potential for toxicity of dietary genistein in a rodent model.\n\n\nMETHODS\nLobund-Wistar rats were fed a 0.025-1.0-mg genistein/g AIN-76A diet. The dorsolateral prostate was subjected to Western blot analysis for expression of tyrosine-phosphorylated proteins, and of the EGF and ErbB2/Neu receptors. Genistein concentrations were measured from serum and prostate using HPLC-mass spectrometry. Body and prostate weights, and circulating testosterone levels, were measured.\n\n\nRESULTS\nIncreasing concentrations of genistein in the diet inhibited tyrosine-phosphorylated proteins with molecular weights of 170,000 and 85,000 in the dorsolateral prostate. Western blot analysis revealed that the 1-mg genistein/g AIN-76A diet inhibited by 50% the expression of the EGF receptor and its phosphorylation. In rats fed this diet, serum-free and total genistein concentrations were 137 and 2,712 pmol/ml, respectively. The free and total genistein IC50 values for the EGF receptor were 150 and 600 pmol/g prostate tissue, respectively. Genistein in the diet also inhibited the ErbB2/Neu receptor. Body and dorsolateral prostate weights, and circulating testosterone concentrations, were not adversely effected from exposure to genistein in the diet for 3 weeks.\n\n\nCONCLUSIONS\nWe conclude that genistein in the diet can downregulate the EGF and ErbB2/Neu receptors in the rat prostate with no apparent adverse toxicity to the host. The concentration needed to achieve a 50% reduction in EGF receptor expression can be achieved by eating a diet high in soy products or with genistein supplementation. Genistein inhibition of the EGF signaling pathway suggests that this phytoestrogen may be useful in both protecting against and treating prostate cancer.",
"title": ""
},
{
"docid": "d5d9e13025662f25b337ccf37ab03f03",
"text": "Music genre is getting complex from time to time. As the size of digital media grows along with amount of data, manual search of digital audio files according to its genre is considered impractical and inefficient; therefore a classification mechanism is needed to improve searching. Zero Crossing Rate (ZCR), Average Energy (E) and Silent Ratio (SR) are a few of features that can be extracted from digital audio files to classify its genre. This research is conducted to classify music from digital audio (songs) into 12 genres: Ballad, Blues, Classic, Harmony, Hip Hop, Jazz, Keroncong, Latin, Pop, Electronic, Reggae and Rock using above mentioned features, extracted from WAV audio files. Classification is performed several times using selected 3, 6, 9 and 12 genres respectively. The result shows that classification of 3 music genres (Ballad, Blues, Classic) has the highest accuracy (96.67%), followed by 6 genres (Ballad, Blues, Classic, Harmony, Hip Hop, Jazz) with 70%, and 9 genres (Ballad, Blues, Classic, Harmony, Hip Hop, Jazz, Keroncong, Latin, Pop) with 53.33% accuracy. Classification of all 12 music genres yields the lowest accuracy of 33.33%. The test results with the k-Nearest Neighbours algorithm to 120 songs for k = 3 accuracy reaches 22.5%, k = 5 accuracy reaches 22.5%, k = 7 accuracy reaching 26.7% and k = 9 accuracy reaches 26.7 %. Results showed that genre classification by matching the shortest distance through the centre of the class, yields better results than using the k-NN algorithm.",
"title": ""
},
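A note for readers of the music-genre abstract above: the paper does not include source code, so the following is only a minimal sketch of how its three hand-crafted features (Zero Crossing Rate, Average Energy, Silent Ratio) and the shortest-distance-to-class-centre rule could be computed with NumPy. The frame size and silence threshold are illustrative assumptions, not values reported in the study.

import numpy as np

def extract_features(signal, frame_size=1024, silence_threshold=0.01):
    """Return [ZCR, average energy, silent ratio] for a mono signal scaled to [-1, 1]."""
    signal = np.asarray(signal, dtype=float)
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)          # fraction of sign changes
    energy = np.mean(signal ** 2)                                 # mean squared amplitude
    n_frames = max(1, len(signal) // frame_size)
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    silent_ratio = np.mean(rms < silence_threshold)               # fraction of quiet frames
    return np.array([zcr, energy, silent_ratio])

def nearest_centroid(train_x, train_y, query_x):
    """Assign the genre whose mean feature vector is closest to the query (Euclidean)."""
    train_x, train_y = np.asarray(train_x, dtype=float), np.asarray(train_y)
    genres = sorted(set(train_y))
    centroids = {g: train_x[train_y == g].mean(axis=0) for g in genres}
    return min(genres, key=lambda g: np.linalg.norm(query_x - centroids[g]))

A k-NN baseline, as compared against in the abstract, would instead rank individual training songs by distance to query_x and take a majority vote among the k closest.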
{
"docid": "99e71a45374284cbcb28b3dbe69e175d",
"text": "Spatial event detection is an important and challenging problem. Unlike traditional event detection that focuses on the timing of global urgent event, the task of spatial event detection is to detect the spatial regions (e.g. clusters of neighboring cities) where urgent events occur. In this paper, we focus on the problem of spatial event detection using textual information in social media. We observe that, when a spatial event occurs, the topics relevant to the event are often discussed more coherently in cities near the event location than those far away. In order to capture this pattern, we propose a new method called Graph Topic Scan Statistic (Graph-TSS) that corresponds to a generalized log-likelihood ratio test based on topic modeling. We first demonstrate that the detection of spatial event regions under Graph-TSS is NP-hard due to a reduction from classical node-weighted prize-collecting Steiner tree problem (NW-PCST). We then design an efficient algorithm that approximately maximizes the graph topic scan statistic over spatial regions of arbitrary form. As a case study, we consider three applications using Twitter data, including Argentina civil unrest event detection, Chile earthquake detection, and United States influenza disease outbreak detection. Empirical evidence demonstrates that the proposed Graph-TSS performs superior over state-of-the-art methods on both running time and accuracy.",
"title": ""
},
{
"docid": "58119c2fc5e4b9d57d1f1e8f0e525e06",
"text": "OBJECTIVES\nDetecting hints to public health threats as early as possible is crucial to prevent harm from the population. However, many disease surveillance strategies rely upon data whose collection requires explicit reporting (data transmitted from hospitals, laboratories or physicians). Collecting reports takes time so that the reaction time grows. Moreover, context information on individual cases is often lost in the collection process. This paper describes a system that tries to address these limitations by processing social media for identifying information on public health threats. The primary objective is to study the usefulness of the approach for supporting the monitoring of a population's health status.\n\n\nMETHODS\nThe developed system works in three main steps: Data from Twitter, blogs, and forums as well as from TV and radio channels are continuously collected and filtered by means of keyword lists. Sentences of relevant texts are classified relevant or irrelevant using a binary classifier based on support vector machines. By means of statistical methods known from biosurveillance, the relevant sentences are further analyzed and signals are generated automatically when unexpected behavior is detected. From the generated signals a subset is selected for presentation to a user by matching with user queries or profiles. In a set of evaluation experiments, public health experts assessed the generated signals with respect to correctness and relevancy. In particular, it was assessed how many relevant and irrelevant signals are generated during a specific time period.\n\n\nRESULTS\nThe experiments show that the system provides information on health events identified in social media. Signals are mainly generated from Twitter messages posted by news agencies. Personal tweets, i.e. tweets from persons observing some symptoms, only play a minor role for signal generation given a limited volume of relevant messages. Relevant signals referring to real world outbreaks were generated by the system and monitored by epidemiologists for example during the European football championship. But, the number of relevant signals among generated signals is still very small: The different experiments yielded a proportion between 5 and 20% of signals regarded as \"relevant\" by the users. Vaccination or education campaigns communicated via Twitter as well as use of medical terms in other contexts than for outbreak reporting led to the generation of irrelevant signals.\n\n\nCONCLUSIONS\nThe aggregation of information into signals results in a reduction of monitoring effort compared to other existing systems. Against expectations, only few messages are of personal nature, reporting on personal symptoms. Instead, media reports are distributed over social media channels. Despite the high percentage of irrelevant signals generated by the system, the users reported that the effort in monitoring aggregated information in form of signals is less demanding than monitoring huge social-media data streams manually. It remains for the future to develop strategies for reducing false alarms.",
"title": ""
},
{
"docid": "185f9e66a467f449d299a4fbbb69bcb9",
"text": "Social media is becoming popular for news consumption due to its fast dissemination, easy access, and low cost. However, it also enables the wide propagation of fake news, i.e., news with intentionally false information. Detecting fake news is an important task, which not only ensures users receive authentic information but also helps maintain a trustworthy news ecosystem. The majority of existing detection algorithms focus on finding clues from news contents, which are generally not effective because fake news is often intentionally written to mislead users by mimicking true news. Therefore, we need to explore auxiliary information to improve detection. The social context during news dissemination process on social media forms the inherent tri-relationship, the relationship among publishers, news pieces, and users, which has the potential to improve fake news detection. For example, partisan-biased publishers are more likely to publish fake news, and low-credible users are more likely to share fake news. In this paper, we study the novel problem of exploiting social context for fake news detection. We propose a tri-relationship embedding framework TriFN, which models publisher-news relations and user-news interactions simultaneously for fake news classification. We conduct experiments on two real-world datasets, which demonstrate that the proposed approach significantly outperforms other baseline methods for fake news detection.",
"title": ""
},
{
"docid": "a839d9e4a80d9a8715119bc53eddbce1",
"text": "Reliable and comprehensive measurement data from large-scale fire tests is needed for validation of computer fire models, but is subject to various uncertainties, including radiation errors in temperature measurement. Here, a simple method for post-processing thermocouple data is demonstrated, within the scope of a series of large-scale fire tests, in order to establish a well characterised dataset of physical parameter values which can be used with confidence in model validation. Sensitivity analyses reveal the relationship of the correction uncertainty to the assumed optical properties and the thermocouple distribution. The analysis also facilitates the generation of maps of an equivalent radiative flux within the fire compartment, a quantity which usefully characterises the thermal exposures of structural components. Large spatial and temporal variations are found, with regions of most severe exposures not being collocated with the peak gas temperatures; this picture is at variance with the assumption of uniform heating conditions often adopted for post-flashover fires.",
"title": ""
},
{
"docid": "19f08f2e9dd22bb2779ded2ad9cd19d4",
"text": "In this paper, a new algorithm for Vehicle Logo Recognition is proposed, on the basis of an enhanced Scale Invariant Feature Transform (Merge-SIFT or M-SIFT). The algorithm is assessed on a set of 1500 logo images that belong to 10 distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1500 images to a training set (database) and to a testing set (query). It is shown that the MSIFT approach, which is proposed in this paper, boosts the recognition accuracy compared to the standard SIFT method. The reported results indicate an average of 94.6% true recognition rate in vehicle logos, while the processing time remains low (~0.8sec).",
"title": ""
},
{
"docid": "ab6238c3fc84540f124ebdb7390882b7",
"text": "ImageCLEF is the image retrieval task of the Conference and Labs of the Evaluation Forum (CLEF). ImageCLEF has historically focused on the multimodal and language-independent retrieval of images. Many tasks are related to image classification and the annotation of image data as well as the retrieval of images. The tuberculosis task was held for the first time in 2017 and had a very encouraging participation with 9 groups submitting results to these very challenging tasks. Two tasks were proposed around tuberculosis: (1) the classification of the cases into five types of tuberculosis and (2) the detection of drug resistances among tuberculosis cases. Many different techniques were used by the participants ranging from Deep Learning to graph-based approaches and best results were obtained by a large variety of approaches. The prediction of tuberculosis types had relatively good performance but the detection of drug resistances remained a very difficult task. More research into this seems necessary.",
"title": ""
},
{
"docid": "fb70a65b6ddbc7507460753910b1b6ec",
"text": "BACKGROUND\nWhile most youth report positive experiences and activities online, little is known about experiences of Internet victimization and associated correlates of youth, specifically in regards to Internet harassment.\n\n\nMETHODS\nThe Youth Internet Safety Survey is a cross-sectional, nationally representative telephone survey of young regular Internet users in the United States. Interviews were conducted between the fall of 1999 and the spring of 2000 and examined characteristics of Internet harassment, unwanted exposure to sexual material, and sexual solicitation that had occurred on the Internet in the previous year. One thousand, five hundred and one regular Internet users between the ages of 10 and 17 years were interviewed, along with one parent or guardian. To assess the characteristics surrounding Internet harassment, four groups of youth were compared: 1) targets of aggression (having been threatened or embarrassed by someone; or feeling worried or threatened by someone's actions); 2) online aggressors (making rude or nasty comments; or harassing or embarrassing someone with whom the youth was mad at); 3) aggressor/targets (youth who report both being an aggressor as well as a target of Internet harassment); and 4) non-harassment involved youth (being neither a target nor an aggressor online).\n\n\nRESULTS\nOf the 19% of young regular Internet users involved in online aggression, 3% were aggressor/targets, 4% reported being targets only, and 12% reported being online aggressors only. Youth aggressor/targets reported characteristics similar to conventional bully/victim youth, including many commonalities with aggressor-only youth, and significant psychosocial challenge.\n\n\nCONCLUSIONS\nYouth aggressor/targets are intense users of the Internet who view themselves as capable web users. Beyond this, however, these youth report significant psychosocial challenge, including depressive symptomatology, problem behavior, and targeting of traditional bullying. Implications for intervention are discussed.",
"title": ""
},
{
"docid": "af3faaf203d771bd7fae3363b8ec8060",
"text": "Recent advances on biometrics, information forensics, and security have improved the accuracy of biometric systems, mainly those based on facial information. However, an ever-growing challenge is the vulnerability of such systems to impostor attacks, in which users without access privileges try to authenticate themselves as valid users. In this work, we present a solution to video-based face spoofing to biometric systems. Such type of attack is characterized by presenting a video of a real user to the biometric system. To the best of our knowledge, this is the first attempt of dealing with video-based face spoofing based in the analysis of global information that is invariant to video content. Our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access. To capture the noise and obtain a compact representation, we use the Fourier spectrum followed by the computation of the visual rhythm and extraction of the gray-level co-occurrence matrices, used as feature descriptors. Results show the effectiveness of the proposed approach to distinguish between valid and fake users for video-based spoofing with near-perfect classification results.",
"title": ""
},
{
"docid": "6f13503bf65ff58b7f0d4f3282f60dec",
"text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.",
"title": ""
},
{
"docid": "acefbbb42607f2d478a16448644bd6e6",
"text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.",
"title": ""
}
] |
scidocsrr
|
21d4139eba13e645375c017caacb1d85
|
Using graded implicit feedback for bayesian personalized ranking
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "f8ea6c873594b0971989cc462527ca97",
"text": "Recommender system aim at providing a personalized list of items ranked according to the preferences of the user, as such ranking methods are at the core of many recommendation algorithms. The topic of this tutorial focuses on the cutting-edge algorithmic development in the area of recommender systems. This tutorial will provide an in depth picture of the progress of ranking models in the field, summarizing the strengths and weaknesses of existing methods, and discussing open issues that could be promising for future research in the community. A qualitative and quantitative comparison between different models will be provided while we will also highlight recent developments in the areas of Reinforcement Learning.",
"title": ""
},
{
"docid": "f1d11ef2739e02af2a95cbc93036bf43",
"text": "Extended Collaborative Less-is-More Filtering xCLiMF is a learning to rank model for collaborative filtering that is specifically designed for use with data where information on the level of relevance of the recommendations exists, e.g. through ratings. xCLiMF can be seen as a generalization of the Collaborative Less-is-More Filtering (CLiMF) method that was proposed for top-N recommendations using binary relevance (implicit feedback) data. The key contribution of the xCLiMF algorithm is that it builds a recommendation model by optimizing Expected Reciprocal Rank, an evaluation metric that generalizes reciprocal rank in order to incorporate user feedback with multiple levels of relevance. Experimental results on real-world datasets show the effectiveness of xCLiMF, and also demonstrate its advantage over CLiMF when more than two levels of relevance exist in the data.",
"title": ""
}
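The xCLiMF passage above optimises Expected Reciprocal Rank (ERR) for graded relevance data. For reference, the snippet below computes the standard ERR metric that xCLiMF targets, using the usual (2^g - 1) / 2^g_max mapping from grades to stop probabilities; the example grade scale is an assumption, and the paper's actual contribution (a smoothed, optimisable lower bound on ERR) is not reproduced here.

def expected_reciprocal_rank(grades, g_max=None):
    """ERR of a ranked list of relevance grades (higher grade = more relevant)."""
    if not grades:
        return 0.0
    if g_max is None:
        g_max = max(grades)
    err, p_not_stopped = 0.0, 1.0
    for rank, g in enumerate(grades, start=1):
        satisfied = (2 ** g - 1) / (2 ** g_max)   # chance the user is satisfied at this rank
        err += p_not_stopped * satisfied / rank
        p_not_stopped *= 1.0 - satisfied
    return err

# Example: ratings on a 0-5 scale for the top four recommended items.
print(expected_reciprocal_rank([5, 2, 0, 4], g_max=5))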
] |
[
{
"docid": "7448defe73a531018b11ac4b4b38b4cb",
"text": "Calcium oxalate crystalluria is a problem of growing concern in dogs. A few reports have discussed acute kidney injury by oxalates in dogs, describing ultrastructural findings in particular. We evaluated the possibility of deposition of calcium oxalate crystals in renal tissue and its probable consequences. Six dogs were intravenously injected with 0.5 M potassium oxalate (KOx) for seven consecutive days. By the end of the experiment, ultrasonography revealed a significant increase in the renal mass and renal parenchymal echogenicity. Serum creatinine and blood urea nitrogen levels were gradually increased. The histopathological features of the kidneys were assessed by both light and electron microscopy, which showed CaOx crystal deposition accompanied by morphological changes in the renal tissue of KOx injected dogs. Canine renal oxalosis provides a good model to study the biological and pathological changes induced upon damage of renal tissue by KOx injection.",
"title": ""
},
{
"docid": "2476e67447d873c0698fce0b032e6d90",
"text": "The emerging paradigm of the Internet of Everything, along with the increasing demand of Internet services everywhere, results in a remarkable and continuous growth of the global Internet traffic. As a cost-effective Internet access solution, WiFi networks currently generate a major portion of the global Internet traffic. Furthermore, the number of WiFi public hotspots worldwide is expected to increase by more than sevenfold by 2018. To face this huge increase in the number of densely deployed WiFi networks, and the massive amount of data to be supported by these networks in indoor and outdoor environments, it is necessary to improve the current WiFi standard and define specifications for high efficiency wireless local area networks (HEWs). This paper presents potential techniques that can be applied for HEWs, in order to achieve the required performance in dense HEW deployment scenarios, as expected in the near future. The HEW solutions under consideration includes physical layer techniques, medium access control layer strategies, spatial frequency reuse schemes, and power saving mechanisms. To accurately assess a newly proposed HEW scheme, we discuss suitable evaluation methodologies, by defining simulation scenarios that represent future HEW usage models, performance metrics that reflect HEW user experience, traffic models for dominant HEW applications, and channel models for indoor and outdoor HEW deployments. Finally, we highlight open issues for future HEW research and development.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "5cba55a67ba27c39ad72e82608052ae1",
"text": "This letter presents a novel dual-band rectifier with extended power range (EPR) and an optimal incident RF power strategy in the settings where the available RF energy fluctuates considerably. It maintains high power conversion efficiency (PCE) in an ultra-wide input power range by adopting a pHEMT in the proposed topology. Simultaneous RF power incident mode is proposed and preferred to the traditional independent mode for multi-band harvesting. Measured results show that more than 30% PCE is obtained with input power ranging from -15 dBm to 20 dBm and peak PCE of 60% is maintained from 5 to 15 dBm. Positive power gain is achieved from -20 dBm to more than 10 dBm. Investigation about the effect of RF power incident ratio on dual-band harvesting's performance is presented and it provides a good reference for future multi-band harvesting system design.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "14d480e4c9256d0ef5e5684860ae4d7f",
"text": "Changes in land use and land cover (LULC) as well as climate are likely to affect the geographic distribution of malaria vectors and parasites in the coming decades. At present, malaria transmission is concentrated mainly in the Amazon basin where extensive agriculture, mining, and logging activities have resulted in changes to local and regional hydrology, massive loss of forest cover, and increased contact between malaria vectors and hosts. Employing presence-only records, bioclimatic, topographic, hydrologic, LULC and human population data, we modeled the distribution of malaria and two of its dominant vectors, Anopheles darlingi, and Anopheles nuneztovari s.l. in northern South America using the species distribution modeling platform Maxent. Results from our land change modeling indicate that about 70,000 km2 of forest land would be lost by 2050 and 78,000 km2 by 2070 compared to 2010. The Maxent model predicted zones of relatively high habitat suitability for malaria and the vectors mainly within the Amazon and along coastlines. While areas with malaria are expected to decrease in line with current downward trends, both vectors are predicted to experience range expansions in the future. Elevation, annual precipitation and temperature were influential in all models both current and future. Human population mostly affected An. darlingi distribution while LULC changes influenced An. nuneztovari s.l. distribution. As the region tackles the challenge of malaria elimination, investigations such as this could be useful for planning and management purposes and aid in predicting and addressing potential impediments to elimination.",
"title": ""
},
{
"docid": "ee9f21361d01a8c678fece3c425f35c2",
"text": "Probabilistic model-based clustering, based on nite mixtures of multivariate models, is a useful framework for clustering data in a statistical context. This general framework can be directly extended to clustering of sequential data, based on nite mixtures of sequential models. In this paper we consider the problem of tting mixture models where both multivariate and sequential observations are present. A general EM algorithm is discussed and experimental results demonstrated on simulated data. The problem is motivated by the practical problem of clustering individuals into groups based on both their static characteristics and their dynamic behavior.",
"title": ""
},
{
"docid": "69f4dc7729dd74642c7b66276c26a971",
"text": "Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 â 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/1+1/eâ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/1+1/e, this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss its applications in mechanism design.",
"title": ""
},
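The prophet-inequality passage above studies threshold stopping rules for n iid draws. The authors' 0.738 guarantee relies on a carefully constructed (non-constant) threshold schedule, which is not reproduced here; the sketch below only simulates the generic single-threshold rule against the hindsight optimum, with a plugged-in threshold chosen for a Uniform(0, 1) prior as an illustrative assumption.

import random

def empirical_ratio(n=20, trials=20000, threshold=None, sample=random.random, seed=0):
    """Ratio of the stopping rule's average value to the prophet's (max in hindsight)."""
    random.seed(seed)
    if threshold is None:
        threshold = 1.0 - 1.0 / n        # accept only the top 1/n tail of Uniform(0, 1)
    algo_total = prophet_total = 0.0
    for _ in range(trials):
        draws = [sample() for _ in range(n)]
        # Stop at the first draw above the threshold; otherwise we are forced to take the last one.
        chosen = next((x for x in draws if x >= threshold), draws[-1])
        algo_total += chosen
        prophet_total += max(draws)
    return algo_total / prophet_total

print(empirical_ratio())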
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
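Both Östergård's algorithm and the one proposed in the passage above are exact branch-and-bound searches for the maximum weight clique. As a point of reference only, here is a plain recursive branch-and-bound with the simplest admissible bound (current clique weight plus the total weight of the remaining candidates); it implements neither paper's ordering or pruning strategy and is meant for small graphs.

def max_weight_clique(adj, weight):
    """adj: dict node -> set of neighbours; weight: dict node -> positive weight."""
    best = {"w": 0, "clique": []}

    def expand(clique, clique_w, candidates):
        if clique_w > best["w"]:
            best["w"], best["clique"] = clique_w, list(clique)
        for i, v in enumerate(candidates):
            # Bound: even adding every remaining candidate cannot beat the incumbent.
            if clique_w + sum(weight[u] for u in candidates[i:]) <= best["w"]:
                return
            expand(clique + [v], clique_w + weight[v],
                   [u for u in candidates[i + 1:] if u in adj[v]])

    expand([], 0, sorted(adj, key=weight.get, reverse=True))
    return best["w"], best["clique"]

# Toy graph: triangle {1, 2, 3} plus an isolated heavy vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}
print(max_weight_clique(adj, {1: 2, 2: 3, 3: 4, 4: 10}))   # -> (10, [4])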
{
"docid": "65d3d020ee63cdeb74cb3da159999635",
"text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.",
"title": ""
},
{
"docid": "c02a55b5a3536f3ab12c65dd0d3037ef",
"text": "The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection and response to complex events, often in real-time. Bridging the gap between low-level receptor technology and such high-level needs of applications remains a significant challenge.We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection using a real-world library scenario.",
"title": ""
},
{
"docid": "78e2311b0c40d055abc144d11926c831",
"text": "Intrusion Detection System is used to detect suspicious activities is one form of defense. However, the sheer size of the network logs makes human log analysis intractable. Furthermore, traditional intrusion detection methods based on pattern matching techniques cannot cope with the need for faster speed to manually update those patterns. Anomaly detection is used as a part of the intrusion detection system, which in turn use certain data mining techniques. Data mining techniques can be applied to the network data to detect possible intrusions. The foremost step in application of data mining techniques is the selection of appropriate features from the data. This paper aims to build an Intrusion Detection System that can detect known and unknown intrusion automatically. Under a data mining framework, the IDS are trained with statistical algorithm, named Chi-Square statistics. This study shows the plan, implementation and the analyze of these threats by using a Chi-Square statistic technique, in order to prevent these attacks and to make a Network Intrusion detection system (NIDS). This proposed model is used to detect anomaly-based network to see how effective this statistical technique in detecting intrusions.",
"title": ""
},
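The NIDS abstract above trains an anomaly detector with a Chi-Square statistic but does not spell out the implementation. A common way to realise the idea, sketched below under our own assumptions about the feature layout, is to learn an expected profile from attack-free traffic and raise an alarm when a record's X^2 distance from that profile exceeds a threshold derived from the training scores.

import numpy as np

def fit_profile(normal_traffic, k=3.0):
    """Learn per-feature expected values and a X^2 alarm threshold from normal records only.
    Features are assumed to be non-negative, count-like quantities (bytes, packets, flags...)."""
    x = np.asarray(normal_traffic, dtype=float)
    expected = x.mean(axis=0) + 1e-9                       # small offset avoids division by zero
    scores = (((x - expected) ** 2) / expected).sum(axis=1)
    threshold = scores.mean() + k * scores.std()           # e.g. mean + 3 standard deviations
    return expected, threshold

def is_anomalous(record, expected, threshold):
    record = np.asarray(record, dtype=float)
    x2 = (((record - expected) ** 2) / expected).sum()
    return x2 > threshold

expected, thr = fit_profile([[200, 3, 0], [180, 2, 0], [220, 4, 1], [210, 3, 0]])
print(is_anomalous([5000, 40, 7], expected, thr))           # flags the outlying connection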
{
"docid": "73973ae6c858953f934396ab62276e0d",
"text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also proposes the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam dataset. The results confirm that the length of words is a critical factor of the robustness of short message spam filtering to good word attack. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
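The good word attack described above inserts ham-indicative words so as to move a linear spam score as far as possible per inserted character, which matters precisely because short messages have a length cap. Assuming the paper's white-box setting of a linear classifier with known word weights, the greedy core of such a strategy can be sketched as follows; the example weights and the character budget are purely illustrative, and the defender's length-aware feature reweighting is not shown.

def good_word_attack(word_weights, char_budget):
    """Pick 'good' words (negative weight = ham-indicative) that buy the largest
    score reduction per inserted character, until the length budget is spent."""
    good = [(w, word) for word, w in word_weights.items() if w < 0]
    good.sort(key=lambda item: item[0] / len(item[1]))   # most negative weight per character first
    chosen, used = [], 0
    for w, word in good:
        cost = len(word) + 1                             # +1 for the separating space
        if used + cost <= char_budget:
            chosen.append(word)
            used += cost
    return chosen

# Illustrative weights (not taken from the paper): negative = ham, positive = spam.
weights = {"meeting": -1.8, "thanks": -0.9, "ok": -0.3, "free": 2.1, "winner": 2.6}
print(good_word_attack(weights, char_budget=12))         # -> ['meeting', 'ok']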
{
"docid": "9223330ceb0b0575379c238672b8afc2",
"text": "Contact networks are often used in epidemiological studies to describe the patterns of interactions within a population. Often, such networks merely indicate which individuals interact, without giving any indication of the strength or intensity of interactions. Here, we use weighted networks, in which every connection has an associated weight, to explore the influence of heterogeneous contact strengths on the effectiveness of control measures. We show that, by using contact weights to evaluate an individual's influence on an epidemic, individual infection risk can be estimated and targeted interventions such as preventative vaccination can be applied effectively. We use a diary study of social mixing behaviour to indicate the patterns of contact weights displayed by a real population in a range of different contexts, including physical interactions; we use these data to show that considerations of link weight can in some cases lead to improved interventions in the case of infections that spread through close contact interactions. However, we also see that simpler measures, such as an individual's total number of social contacts or even just their number of contacts during a single day, can lead to great improvements on random vaccination. We therefore conclude that, for many infections, enhanced social contact data can be simply used to improve disease control but that it is not necessary to have full social mixing information in order to enhance interventions.",
"title": ""
},
{
"docid": "d5130b0353dd05e6a0e6e107c9b863e0",
"text": "We study Euler–Poincaré systems (i.e., the Lagrangian analogue of LiePoisson Hamiltonian systems) defined on semidirect product Lie algebras. We first give a derivation of the Euler–Poincaré equations for a parameter dependent Lagrangian by using a variational principle of Lagrange d’Alembert type. Then we derive an abstract Kelvin-Noether theorem for these equations. We also explore their relation with the theory of Lie-Poisson Hamiltonian systems defined on the dual of a semidirect product Lie algebra. The Legendre transformation in such cases is often not invertible; thus, it does not produce a corresponding Euler–Poincaré system on that Lie algebra. We avoid this potential difficulty by developing the theory of Euler–Poincaré systems entirely within the Lagrangian framework. We apply the general theory to a number of known examples, including the heavy top, ideal compressible fluids and MHD. We also use this framework to derive higher dimensional Camassa-Holm equations, which have many potentially interesting analytical properties. These equations are Euler-Poincaré equations for geodesics on diffeomorphism groups (in the sense of the Arnold program) but where the metric is H rather than L. ∗Research partially supported by NSF grant DMS 96–33161. †Research partially supported by NSF Grant DMS-9503273 and DOE contract DE-FG0395ER25245-A000.",
"title": ""
},
{
"docid": "0ed8212399f2e93017fde1c5819acb61",
"text": "This study examines the acceptance of technology and behavioral intention to use learning management systems (LMS). In specific, the aim of the research reported in this paper is to examine whether students ultimately accept LMSs such as eClass and the impact of behavioral intention on their decision to use them. An extended version of technology acceptance model has been proposed and used by employing one of the most reliable measures of perceived eased of use, the System Usability Scale. 345 university students participated in the study. The data analysis was based on partial least squares method. The majority of the research hypotheses were confirmed. In particular, social norm, system access and self-efficacy were found to significantly affect behavioral intention to use. As a result, it is suggested that e-learning developers and stakeholders should focus on these factors to increase acceptance and effectiveness of learning management systems.",
"title": ""
},
{
"docid": "1f4c0407c8da7b5fe685ad9763be937b",
"text": "As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool, called DROIDGUARD, combines static code analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, DROIDGUARD has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.",
"title": ""
},
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
scidocsrr
|
d43d993b29a40b6ab04751046e3ccc6d
|
Polarity Loss for Zero-shot Object Detection
|
[
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "a26717cb49e3886c2b2eaab4c9694183",
"text": "Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.",
"title": ""
}
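Representational similarity analysis, as used in the passage above to compare image-based and text-based semantic models against fMRI structure, comes down to correlating the pairwise-dissimilarity patterns of two spaces over the same stimuli. A minimal SciPy version is sketched below; the choice of correlation distance and Spearman rank correlation follows common RSA practice and is an assumption rather than the paper's exact pipeline.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_vectors, brain_vectors):
    """Spearman correlation between two condensed representational dissimilarity matrices.
    Rows of both arrays must describe the same stimuli (e.g. object words) in the same order."""
    model_rdm = pdist(model_vectors, metric="correlation")
    brain_rdm = pdist(brain_vectors, metric="correlation")
    rho, p_value = spearmanr(model_rdm, brain_rdm)
    return rho, p_value

# Toy example with random matrices standing in for image-model features and voxel patterns.
rng = np.random.default_rng(0)
print(rsa_score(rng.normal(size=(20, 50)), rng.normal(size=(20, 300))))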
] |
[
{
"docid": "ff3867a1c0ee1d3f1e61cb306af37bb1",
"text": "Introduction: The mucocele is one of the most common benign soft tissue masses that occur in the oral cavity. Mucoceles (mucus and coele cavity), by definition, are cavities filled with mucus. Two types of mucoceles can appear – extravasation type and retention type. Diagnosis is mostly based on clinical findings. The common location of the extravasation mucocele is the lower lip and the treatment of choice is surgical removal. This paper gives an insight into the phenomenon and a case report has been presented. Case report: Twenty five year old femalepatient reported with chief complaint of small swelling on the left side of the lower lip since 2 months. The swelling was diagnosed as extravasation mucocele after history and clinical examination. The treatment involved surgical excision of tissue and regular follow up was done to check for recurrence. Conclusion: The treatment of lesion such as mucocele must be planned taking into consideration the various clinical parameters and any oral habits as these lesions have a propensity of recurrence.",
"title": ""
},
{
"docid": "23305a36194ad3c9b6b3f667c79bd273",
"text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.",
"title": ""
},
{
"docid": "7ce1646e0fe1bd83f9feb5ec20233c93",
"text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.",
"title": ""
},
{
"docid": "27d7f7935c235a3631fba6e3df08f623",
"text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.",
"title": ""
},
{
"docid": "04859c652e62bfcb4c68cfc547b36e42",
"text": "BACKGROUND\nWhether the use of sevelamer rather than a calcium-containing phosphate binder improves cardiovascular (CV) survival in patients receiving dialysis remains to be elucidated.\n\n\nSTUDY DESIGN\nOpen-label randomized controlled trial with parallel groups.\n\n\nSETTINGS & PARTICIPANTS\n466 incident hemodialysis patients recruited from 18 centers in Italy.\n\n\nINTERVENTION\nStudy participants were randomly assigned in a 1:1 fashion to receive either sevelamer or a calcium-containing phosphate binder (although not required by the protocol, all patients in this group received calcium carbonate) for 24 months.\n\n\nOUTCOMES\nAll individuals were followed up until completion of 36 months of follow-up or censoring. CV death due to cardiac arrhythmias was regarded as the primary end point.\n\n\nMEASUREMENTS\nBlind event adjudication.\n\n\nRESULTS\nAt baseline, patients allocated to sevelamer had higher serum phosphorus (mean, 5.6 ± 1.7 [SD] vs 4.8 ± 1.4 mg/dL) and C-reactive protein levels (mean, 8.8 ± 13.4 vs 5.9 ± 6.8 mg/dL) and lower coronary artery calcification scores (median, 19 [IQR, 0-30] vs 30 [IQR, 7-180]). At study completion, serum phosphate levels were lower in the sevelamer arm (median dosages, 4,800 and 2,000 mg/d for sevelamer and calcium carbonate, respectively). After a mean follow-up of 28 ± 10 months, 128 deaths were recorded (29 and 88 due to cardiac arrhythmias and all-cause CV death). Sevelamer-treated patients experienced lower CV mortality due to cardiac arrhythmias compared with patients treated with calcium carbonate (HR, 0.06; 95% CI, 0.01-0.25; P < 0.001). Similar results were noted for all-cause CV mortality and all-cause mortality, but not for non-CV mortality. Adjustments for potential confounders did not affect results.\n\n\nLIMITATIONS\nOpen-label design, higher baseline coronary artery calcification burden in calcium carbonate-treated patients, different mineral metabolism control in sevelamer-treated patients, overall lower than expected mortality.\n\n\nCONCLUSIONS\nThese results show that sevelamer compared to a calcium-containing phosphate binder improves survival in a cohort of incident hemodialysis patients. However, the better outcomes in the sevelamer group may be due to better phosphate control rather than reduction in calcium load.",
"title": ""
},
{
"docid": "c1ee9109435a6535e1512669b632e490",
"text": "The theory of structural holes suggests that individuals would benefit from filling the \"holes\" (called as structural hole spanners) between people or groups that are otherwise disconnected. A few empirical studies have verified that structural hole spanners play a key role in the information diffusion. However, there is still lack of a principled methodology to detect structural hole spanners from a given social network.\n In this work, we precisely define the problem of mining top-k structural hole spanners in large-scale social networks and provide an objective (quality) function to formalize the problem. Two instantiation models have been developed to implement the objective function. For the first model, we present an exact algorithm to solve it and prove its convergence. As for the second model, the optimization is proved to be NP-hard, and we design an efficient algorithm with provable approximation guarantees.\n We test the proposed models on three different networks: Coauthor, Twitter, and Inventor. Our study provides evidence for the theory of structural holes, e.g., 1% of Twitter users who span structural holes control 25% of the information diffusion on Twitter. We compare the proposed models with several alternative methods and the results show that our models clearly outperform the comparison methods. Our experiments also demonstrate that the detected structural hole spanners can help other social network applications, such as community kernel detection and link prediction. To the best of our knowledge, this is the first attempt to address the problem of mining structural hole spanners in large social networks.",
"title": ""
},
{
"docid": "ecabfcbb40fc59f1d1daa02502164b12",
"text": "We present a generalized line histogram technique to compute global rib-orientation for detecting rotated lungs in chest radiographs. We use linear structuring elements, such as line seed filters, as kernels to convolve with edge images, and extract a set of lines from the posterior rib-cage. After convolving kernels in all possible orientations in the range [0, π], we measure the angle for which the line histogram has maximum magnitude. This measure provides a good approximation of the global chest rib-orientation for each lung. A chest radiograph is said to be upright if the difference between the orientation angles of both lungs with respect to the horizontal axis, is negligible. We validate our method on sets of normal and abnormal images and argue that rib orientation can be used for rotation detection in chest radiographs as aid in quality control during image acquisition, and to discard images from training and testing data sets. In our test, we achieve a maximum accuracy of 90%.",
"title": ""
},
{
"docid": "d8aae877405d95d592b7460bb10d8ebd",
"text": "People sometimes choose word-like abbreviations to refer to items with a long description. These abbreviations usually come from the descriptive text of the item and are easy to remember and pronounce, while preserving the key idea of the item. Coming up with a nice abbreviation is not an easy job, even for human. Previous assistant naming systems compose names by applying hand-written rules, which may not perform well. In this paper, we propose to view the naming task as an artificial intelligence problem and create a data set in the domain of academic naming. To generate more delicate names, we propose a three-step framework, including description analysis, candidate generation and abbreviation ranking, each of which is parameterized and optimizable. We conduct experiments to compare different settings of our framework with several analysis approaches from different perspectives. Compared to online or baseline systems, our framework could achieve the best results.",
"title": ""
},
{
"docid": "5253ff017c3fb6d2ea9d7162a563b1cb",
"text": "This paper presents an analysis of the human biomechanical considerations related to the development of lower limb exoskeletons. Factors such as kinematic alignment and compatibility, joint range of motion, maximum torque, and joint bandwidth are discussed in the framework of a review of the design specifications for exoskeleton prototypes discussed in the literature. From this analysis, we discuss major gaps in the research related to the topic and how those might be filled.",
"title": ""
},
{
"docid": "811454b2fae8bb4720d703f2dc1b1fe0",
"text": "Cybersecurity risks and malware threats are becoming increasingly dangerous and common. Despite the severity of the problem, there has been few NLP efforts focused on tackling cybersecurity. In this paper, we discuss the construction of a new database for annotated malware texts. An annotation framework is introduced based around the MAEC vocabulary for defining malware characteristics, along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences. We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts.",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
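The EBK-modes passage above weights each categorical attribute within each cluster by how concentrated its values are, but the abstract does not restate the exact formula. The snippet below is therefore only a plausible sketch of the idea: it computes the per-attribute Shannon entropy inside a cluster and converts low entropy into high weight with a softmax over negated entropies; that normalisation choice is ours, not the paper's.

import math
from collections import Counter

def attribute_weights(cluster_rows):
    """cluster_rows: list of equal-length tuples of categorical values belonging to one cluster."""
    n_attrs = len(cluster_rows[0])
    entropies = []
    for j in range(n_attrs):
        counts = Counter(row[j] for row in cluster_rows)
        total = sum(counts.values())
        entropies.append(-sum((c / total) * math.log(c / total) for c in counts.values()))
    # Lower entropy (a more homogeneous attribute) gets a larger weight; weights sum to 1.
    exp_neg = [math.exp(-h) for h in entropies]
    z = sum(exp_neg)
    return [e / z for e in exp_neg]

# The colour attribute is nearly constant in this toy cluster, so it receives the larger weight.
print(attribute_weights([("red", "s"), ("red", "m"), ("red", "l"), ("blue", "l")]))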
{
"docid": "4074b8cd9b869a7a57f2697b97139308",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a similarity space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define various operations for our formalization, both for creating new concepts from old ones and for measuring relations between concepts. We present an illustrative toy-example and sketch a research project on concept formation that is based on both our formalization and its implementation.",
"title": ""
},
{
"docid": "4973ce25e2a638c3923eda62f92d98b2",
"text": "About 20 ethnic groups reside in Mongolia. On the basis of genetic and anthropological studies, it is believed that Mongolians have played a pivotal role in the peopling of Central and East Asia. However, the genetic relationships among these ethnic groups have remained obscure, as have their detailed relationships with adjacent populations. We analyzed 16 binary and 17 STR polymorphisms of human Y chromosome in 669 individuals from nine populations, including four indigenous ethnic groups in Mongolia (Khalkh, Uriankhai, Zakhchin, and Khoton). Among these four Mongolian populations, the Khalkh, Uriankhai, and Zakhchin populations showed relatively close genetic affinities to each other and to Siberian populations, while the Khoton population showed a closer relationship to Central Asian populations than to even the other Mongolian populations. These findings suggest that the major Mongolian ethnic groups have a close genetic affinity to populations in northern East Asia, although the genetic link between Mongolia and Central Asia is not negligible.",
"title": ""
},
{
"docid": "fc9b4cb8c37ffefde9d4a7fa819b9417",
"text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.",
"title": ""
},
{
"docid": "1022d96690f759a350295ce4eb1c217f",
"text": "This paper provides an overview of current types of CNTFETs and of some compact models. Using the available models, the influence of the parameters on the device characteristics was simulated and analyzed. The conclusion is that the tube diameter influences not only the current level, but also the threshold voltage of the CNTFET, while the contact resistance influences only the current level. From a designer's point of view, taking care of the parameter variations and in particular of the nanotube diameters is crucial to achieve reliable circuits",
"title": ""
},
{
"docid": "8a3d11e9b8c145145210bf32f0a199c2",
"text": "Specific emitter identification (SEI) techniques are often used in civilian and military spectrum-management operations, and they are also applied to support the security and authentication of wireless communication. In this letter, a new SEI method based on the natural measure of the one-dimensional component of the chaotic system is proposed. We find that the natural measures of the one-dimensional components of higher dimensional systems exist and that they are quite diverse for different systems. Based on this principle, the natural measure is used as an RF fingerprint in this letter. The natural measure can solve the problems caused by a small amount of data and a low sample rate. The Kullback–Leibler divergence is used to quantify the difference between the natural measures obtained from diverse emitters and classify them. The data obtained from real application are exploited to test the validity of the proposed method. Experimental results show that the proposed method is not only easy to operate, but also quite effective, even though the amount of data is small and the sample rate is low.",
"title": ""
},
{
"docid": "48c78545d402b5eed80e705feb45f8f2",
"text": "With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.",
"title": ""
},
{
"docid": "5c716fbdc209d5d9f703af1e88f0d088",
"text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.",
"title": ""
},
{
"docid": "ee54c02fb1856ccf4f11fe1778f0883c",
"text": "Failure Mode, Mechanism and Effect Analysis (FMMEA) is a reliability analysis method which is used to study possible failure modes, failure mechanisms of each component, and to identify the effects of various failure modes on the components and functions. This paper introduces how to implement FMMEA on the Single Board Computer in detail, including system definition, identification of potential failure modes, analysis of failure cause, failure mechanism, and failure effect analysis. Finite element analysis is carried out for the Single Board Computer, including thermal stress analysis and vibration stress analysis. Temperature distribution and vibration modes are obtained, which are the inputs of physics of failure models. Using a variety of Physics of Failure models, the quantitative calculation of single point failure for the Single Board Computer are carried out. Results showed that the time to failure (TTF) of random access memory chip which is SOP (small outline package) is the shortest and the failure is due to solder joint fatigue failure caused by the temperature cycle. It is the weak point of the entire circuit board. Thus solder joint thermal fatigue failure is the main failure mechanism of the Single Board Computer. In the implementation process of PHM for the Single Board Computer, the failure condition of this position should be monitored.",
"title": ""
},
{
"docid": "a22ebcf11189744e7e4f15d82b1fa9d2",
"text": "Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here, we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction.",
"title": ""
}
] |
scidocsrr
|
751976f7bd19459c099d88f666badbb5
|
Towards a fully automated 3D printability checker
|
[
{
"docid": "20d186b7db540be57492daa805b51b31",
"text": "Printability, the capability of a 3D printer to closely reproduce a 3D model, is a complex decision involving several geometrical attributes like local thickness, shape of the thin regions and their surroundings, and topology with respect to thin regions. We present a method for assessment of 3D shape printability which efficiently and effectively computes such attributes. Our method uses a simple and efficient voxel-based representation and associated computations. Using tools from multi-scale morphology and geodesic analysis, we propose several new metrics for various printability problems. We illustrate our method with results taken from a real-life application.",
"title": ""
}
] |
[
{
"docid": "36c63ad3970c7cbcf9ece1da33cf04fa",
"text": "In recent years, hashing-based methods for large-scale similarity search have sparked considerable research interests in the data mining and machine learning communities. While unsupervised hashing-based methods have achieved promising successes for metric similarity, they cannot handle semantic similarity which is usually given in the form of labeled point pairs. To overcome this limitation, some attempts have recently been made on semi-supervised hashing which aims at learning hash functions from both metric and semantic similarity simultaneously. Existing semi-supervised hashing methods can be regarded as passive hashing since they assume that the labeled pairs are provided in advance. In this paper, we propose a novel framework, called active hashing, which can actively select the most informative labeled pairs for hash function learning. Specifically, it identifies the most informative points to label and constructs labeled pairs accordingly. Under this framework, we use data uncertainty as a measure of informativeness and develop a batch mode algorithm to speed up active selection. We empirically compare our method with a state-of-the-art passive hashing method on two benchmark data sets, showing that the proposed method can reduce labeling cost as well as overcome the limitations of passive hashing.",
"title": ""
},
{
"docid": "c2b0dfb06f82541fca0d2700969cf0d9",
"text": "Magnetic resonance is an exceptionally powerful and versatile measurement technique. The basic structure of a magnetic resonance experiment has remained largely unchanged for almost 50 years, being mainly restricted to the qualitative probing of only a limited set of the properties that can in principle be accessed by this technique. Here we introduce an approach to data acquisition, post-processing and visualization—which we term ‘magnetic resonance fingerprinting’ (MRF)—that permits the simultaneous non-invasive quantification of multiple important properties of a material or tissue. MRF thus provides an alternative way to quantitatively detect and analyse complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to identify the presence of a specific target material or tissue, which will increase the sensitivity, specificity and speed of a magnetic resonance study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern-recognition algorithm, MRF inherently suppresses measurement errors and can thus improve measurement accuracy.",
"title": ""
},
{
"docid": "5fbdeba4f91d31a9a3555109872ff250",
"text": "Wepresent new results for the Frank–Wolfemethod (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-calledFWgaps. Lastly,wepresent complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions. Mathematics Subject Classification 90C06 · 90C25 · 65K05",
"title": ""
},
{
"docid": "b0cd5b02bb86d1a2d7eea7738c46b2e5",
"text": "Behavioral indicators of deception and behavioral state are extremely difficult for humans to analyze. Blob analysis, a method for analyzing the movement of the head and hands based on the identification of skin color is presented. This method is validated with numerous skin tones. A proof-of-concept study is presented that uses blob analysis to explore behavioral state identification in the detection of deception.",
"title": ""
},
{
"docid": "8de4b9e9a3ba2910fe3d091f7e7f8936",
"text": "This paper demonstrates the capability of curve fitting using Artificial Neural Network (ANN), not only for a moderate set of input data but also for a coarse set of input. When appropriate number of neurons is chosen for the training purpose, accurate graphs can be obtained, despite having a coarse data. The effect of number of neurons used for curve fitting and the accuracy obtained is also studied. This aspect of ANN has been illustrated through 2 examples, Weibull distribution and another complex sinusoidal system. This curve fitting technique has been applied to a real world problem i.e. mechanism of a deep drawing press, for both slider displacement and slider velocity. Key–Words: curve fitting, ANN, accurate, coarse data, best fit, neurons",
"title": ""
},
{
"docid": "a262c272dac3b0ac86694fe738395b72",
"text": "This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform “weight tuning” for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate the significant improvement over standard neural models.",
"title": ""
},
{
"docid": "bebd8b3ff0430258291de91d756eeb1b",
"text": "Infection of cells by microorganisms activates the inflammatory response. The initial sensing of infection is mediated by innate pattern recognition receptors (PRRs), which include Toll-like receptors, RIG-I-like receptors, NOD-like receptors, and C-type lectin receptors. The intracellular signaling cascades triggered by these PRRs lead to transcriptional expression of inflammatory mediators that coordinate the elimination of pathogens and infected cells. However, aberrant activation of this system leads to immunodeficiency, septic shock, or induction of autoimmunity. In this Review, we discuss the role of PRRs, their signaling pathways, and how they control inflammatory responses.",
"title": ""
},
{
"docid": "c5f1d5fc5c5161bc9795cdc0362b8ca7",
"text": "Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.",
"title": ""
},
{
"docid": "babac76166921edd1f29a2818380cc5c",
"text": "Content-Centric Networking (CCN) is an emerging (inter-)networking architecture with the goal of becoming an alternative to the IP-based Internet. To be considered a viable candidate, CCN must at least have parity with existing solutions for confidential and anonymous communication, e.g., TLS, tcpcrypt, and Tor. ANDa̅NA (Anonymous Named Data Networking Application) was the first proposed solution that addressed the lack of anonymous communication in Named Data Networking (NDN)-a variant of CCN. However, its design and implementation led to performance issues that hinder practical use. In this paper we introduce AC3N: Anonymous Communication for Content-Centric Networking. AC3N is an evolution of the ANDa̅NA system that supports high-throughput and low-latency anonymous content retrieval. We discuss the design and initial performance results of this new system.",
"title": ""
},
{
"docid": "82e823324c1717996d09b11bdfdc4a62",
"text": "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, the neural networks used in practice are going wider and deeper. On the theoretical side, a long line of works have been focusing on why we can train neural networks when there is only one hidden layer. The theory of multi-layer networks remains somewhat unsettled. In this work, we prove why simple algorithms such as stochastic gradient descent (SGD) can find global minima on the training objective of DNNs in polynomial time. We only make two assumptions: the inputs do not degenerate and the network is over-parameterized. The latter means the number of hidden neurons is sufficiently large: polynomial in L, the number of DNN layers and in n, the number of training samples. As concrete examples, on the training set and starting from randomly initialized weights, we show that SGD attains 100% accuracy in classification tasks, or minimizes regression loss in linear convergence speed ε ∝ e−Ω(T , with a number of iterations that only scales polynomial in n and L. Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet). ∗V1 appears on arXiv on this date and no new result is added since then. V2 adds citations and V3/V4 polish writing. This work was done when Yuanzhi Li and Zhao Song were 2018 summer interns at Microsoft Research Redmond. When this work was performed, Yuanzhi Li was also affiliated with Princeton, and Zhao Song was also affiliated with UW and Harvard. We would like to specially thank Greg Yang for many enlightening discussions, thank Ofer Dekel, Sebastien Bubeck, and Harry Shum for very helpful conversations, and thank Jincheng Mei for carefully checking the proofs of this paper. ar X iv :1 81 1. 03 96 2v 4 [ cs .L G ] 4 F eb 2 01 9",
"title": ""
},
{
"docid": "753a4af9741cd3fec4e0e5effaf5fc67",
"text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.",
"title": ""
},
{
"docid": "80f9f3f12e33807e63ee5ba58916d41c",
"text": "Positivist and interpretivist researchers have different views on how their research outcomes may be evaluated. The issues of validity, reliability and generalisability, used in evaluating positivist studies, are regarded of relatively little significance by many qualitative researchers for judging the merits of their interpretive investigations. In confirming the research, those three canons need at least to be re-conceptualised in order to reflect the keys issues of concern for interpretivists. Some interpretivists address alternative issues such as credibility, dependability and transferability when determining the trustworthiness of their qualitative investigations. A strategy proposed by several authors for establishing the trustworthiness of the qualitative inquiry is the development of a research audit trail. The audit trail enables readers to trace through a researcher’s logic and determine whether the study’s findings may be relied upon as a platform for further enquiry. While recommended in theory, this strategy is rarely implemented in practice. This paper examines the role of the research audit trail in improving the trustworthiness of qualitative research. Further, it documents the development of an audit trail for an empirical qualitative research study that centred on an interpretive evaluation of a new Information and Communication Technology (ICT) student administrative system in the tertiary education sector in the Republic of Ireland. This research study examined the impact of system introduction across five Institutes of Technology (IoTs) through case study research that incorporated multiple evidence sources. The evidence collected was analysed using a grounded theory method, which was supported by qualitative data analysis software. The key concepts and categories that emerged from this process were synthesized into a cross case primary narrative; through reflection the primary narrative was reduced to a higher order narrative that presented the principle findings or key research themes. From this higher order narrative a theoretical conjecture was distilled. Both a physical and intellectual audit trail for this study are presented in this paper. The physical audit trail documents all keys stages of a research study and reflects the key research methodology decisions. The intellectual audit trail, on the other hand, outlines how a researcher’s thinking evolved throughout all phases of the study. Hence, these audit trails make transparent the key decisions taken throughout the research process. The paper concludes by discussing the value of this audit trail process in confirming a qualitative study’s findings.",
"title": ""
},
{
"docid": "22bc517b6e8e0688f72fa9737857c582",
"text": "In this work we address the problem of point-cloud denoising where we assume that a given point-cloud comprises (noisy) points that were sampled from an underlying surface that is to be denoised. We phrase the point-cloud denoising problem in terms of a dictionary learning framework. To this end, for a given point-cloud we (robustly) extract planar patches covering the entire point-cloud, where each patch contains a (noisy) description of the local structure of the underlying surface. Based on the general assumption that many of the local patches (in the noise-free point-cloud) contain redundant information (e.g. due to smoothness of the surface, or due to repetitive structures), we find a low-dimensional affine subspace that (approximately) explains the extracted (noisy) patches. Computationally, this is achieved by solving a structured low-rank matrix factorization problem, where we impose smoothness on the patch dictionary and sparsity on the coefficients. We experimentally demonstrate that our method outperforms existing denoising approaches in various noise scenarios.",
"title": ""
},
{
"docid": "3c2422e30323de7e85f5515c191c4ccf",
"text": "The feasibility and popularity of mobile healthcare are currently increasing. The advancement of modern technologies, such as wireless communication, data processing, the Internet of Things, cloud, and edge computing, makes mobile healthcare simpler than before. In addition, the deep learning approach brings a revolution in the machine learning domain. In this paper, we investigate a voice pathology detection system using deep learning on the mobile healthcare framework. A mobile multimedia healthcare framework is also designed. In the voice pathology detection system, voices are captured using smart mobile devices. Voice signals are processed before being fed to a convolutional neural network (CNN). We use a transfer learning technique to use the existing robust CNN models. In particular, the VGG-16 and CaffeNet models are investigated in the paper. The Saarbrucken voice disorder database is used in the experiments. Experimental results show that the voice pathology detection accuracy reaches up to 97.5% using the transfer learning of CNN models.",
"title": ""
},
{
"docid": "7ead5f6b374024f5153fe6f4db18a64d",
"text": "Smart mobile device usage has expanded at a very high rate all over the world. Since the mobile devices nowadays are used for a wide variety of application areas like personal communication, data storage and entertainment, security threats emerge, comparable to those which a conventional PC is exposed to. Mobile malware has been growing in scale and complexity as smartphone usage continues to rise. Android has surpassed other mobile platforms as the most popular whilst also witnessing a dramatic increase in malware targeting the platform. In this work, we have considered Android based malware for analysis and a scalable detection mechanism is designed using multifeature collaborative decision fusion (MCDF). The different features of a malicious file like the permission based features and the API call based features are considered in order to provide a better detection by training an ensemble of classifiers and combining their decisions using collaborative approach based on probability theory. The performance of the proposed model is evaluated on a collection of Android based malware comprising of different malware families and the results show that our approach give a better performance than state-of-the-art ensemble schemes available. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "edb7adc3e665aa2126be1849431c9d7f",
"text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.",
"title": ""
},
{
"docid": "740e103c2f1a8283476a9e901f719be8",
"text": "The design of a novel practical 28 GHz beam steering phased array antenna for future fifth generation mobile device applications is presented in this communication. The proposed array antenna has 16 cavity-backed slot antenna elements that are implemented via the metallic back casing of the mobile device, in which two eight-element phased arrays are built on the left- and right-side edges of the mobile device. Each eight-element phased array can yield beam steering at broadside and gain of >15 dBi can be achieved at boresight. The measured 10 dB return loss bandwidth of the proposed cavity-backed slot antenna element was approximately 27.5–30 GHz. In addition, the impacts of user’s hand effects are also investigated.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "6859e19b3f8503869cdc8f1a77fc7526",
"text": "HyperNEAT represents a class of neuroevolutionary algorithms that captures some of the power of natural development with a computationally efficient high-level abstraction of development. This class of algorithms is intended to provide many of the desirable properties produced in biological phenotypes by natural developmental processes, such as regularity, modularity and hierarchy. While it has been previously shown that HyperNEAT produces regular artificial neural network (ANN) phenotypes, in this paper we investigated the open question of whether HyperNEAT can produce modular ANNs. We conducted such research on problems where modularity should be beneficial, and found that HyperNEAT failed to generate modular ANNs. We then imposed modularity on HyperNEAT's phenotypes and its performance improved, demonstrating that modularity increases performance on this problem. We next tested two techniques to encourage modularity in HyperNEAT, but did not observe an increase in either modularity or performance. Finally, we conducted tests on a simpler problem that requires modularity and found that HyperNEAT was able to rapidly produce modular solutions that solved the problem. We therefore present the first documented case of HyperNEAT producing a modular phenotype, but our inability to encourage modularity on harder problems where modularity would have been beneficial suggests that more work is needed to increase the likelihood that HyperNEAT and similar algorithms produce modular ANNs in response to challenging, decomposable problems.",
"title": ""
},
{
"docid": "bb01b5e24d7472ab52079dcb8a65358d",
"text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.",
"title": ""
}
] |
scidocsrr
|
120a2c9904f7568248f41aac1bd22836
|
IoT based smart home automation system using sensor node
|
[
{
"docid": "d253029f47fe3afb6465a71e966fdbd5",
"text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.",
"title": ""
}
] |
[
{
"docid": "01e96ffcbf3514c2a058fc30dd969731",
"text": "Fish display robust neuroendocrine and physiologic stress responses to noxious stimuli. Many anesthetic, sedative, or analgesic drugs used in other vertebrates reduce stress in fi sh, decrease handling trauma, minimize movement and physiologic changes in response to nociceptive stimuli, and can be used for euthanasia. But extrapolating from limited published anesthetic and sedative data to all fi sh species is potentially harmful because of marked anatomic, physiologic, and behavioral variations; instead, a stepwise approach to anesthetizing or sedating unfamiliar species or using unproven drugs for familiar species is advisable. Additionally, knowledge of how water quality infl uences anesthesia or sedation helps limit complications. The most common method of drug administration is through immersion, a technique analogous to gaseous inhalant anesthesia in terrestrial animals, but the use of injectable anesthetic and sedative agents (primarily intramuscularly, but also intravenously) is increasing. Regardless of the route of administration, routine preprocedural preparation is appropriate, to stage both the animals and the supplies for induction, maintenance, and recovery. Anesthetic and sedation monitoring and resuscitation are similar to those for other vertebrates. Euthanasia is most commonly performed using an overdose of an immersion drug but injectable agents are also effective. Analgesia is an area in need of signifi cant research as only a few studies exist and they provide some contrasting results. However, fi sh have μ and κ opiate receptors throughout the brain, making it reasonable to expect some effect of at least opioid treatments in fi sh experiencing noxious stimuli.",
"title": ""
},
{
"docid": "af0bfcd39271d2c6b5734c9665f758e6",
"text": "The architecture of the subterranean nests of the ant Odontomachus brunneus (Patton) (Hymenoptera: Formicidae) was studied by means of casts with dental plaster or molten metal. The entombed ants were later recovered by dissolution of plaster casts in hot running water. O. brunneus excavates simple nests, each consisting of a single, vertical shaft connecting more or less horizontal, simple chambers. Nests contained between 11 and 177 workers, from 2 to 17 chambers, and 28 to 340 cm(2) of chamber floor space and reached a maximum depth of 18 to 184 cm. All components of nest size increased simultaneously during nest enlargement, number of chambers, mean chamber size, and nest depth, making the nest shape (proportions) relatively size-independent. Regardless of nest size, all nests had approximately 2 cm(2) of chamber floor space per worker. Chambers were closer together near the top and the bottom of the nest than in the middle, and total chamber area was greater near the bottom. Colonies occasionally incorporated cavities made by other animals into their nests.",
"title": ""
},
{
"docid": "0cfda368edafe21e538f2c1d7ed75056",
"text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.",
"title": ""
},
{
"docid": "7223f14d3ea2d10661185c8494b81438",
"text": "In 1990 the molecular basis for a hereditary disorder in humans, hyperkalemic periodic paralysis, was first genetically demonstrated to be impaired ion channel function. Since then over a dozen diseases, now termed as channelopathies, have been described. Most of the disorders affect excitable tissue such as muscle and nerve; however, kidney diseases have also been described. Basic research on structure-function relationships and physiology of excitation has benefited tremendously from the discovery of disease-causing mutations pointing to regions of special significance within the channel proteins. This course focuses mainly on the clinical and genetic features of neurological disturbances in humans caused by genetic defects in voltage-gated sodium, calcium, potassium, and chloride channels. Disorders of skeletal muscle are by far the most studied and therefore more detailed in this text than the neuronal channelopathies which have been discovered only very recently. Review literature may be found in the attached reference list [1–12]. Skeletal muscle sodium channelopathies",
"title": ""
},
{
"docid": "12ee117f58c5bd5b6794de581bfcacdb",
"text": "The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.",
"title": ""
},
{
"docid": "636f5002b3ced8a541df3e0568604f71",
"text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.",
"title": ""
},
{
"docid": "a5a4b9667996958cc591da63811e2904",
"text": "Human activity recognition (HAR) is a promising research issue in ubiquitous and wearable computing. However, there are some problems existing in traditional methods: 1) They treat HAR as a single label classification task, and ignore the information from other related tasks, which is helpful for the original task. 2) They need to predesign features artificially, which are heuristic and not tightly related to HAR task. To address these problems, we propose AROMA (human activity recognition using deep multi-task learning). Human activities can be divided into simple and complex activities. They are closely linked. Simple and complex activity recognitions are two related tasks in AROMA. For simple activity recognition task, AROMA utilizes a convolutional neural network (CNN) to extract deep features, which are task dependent and non-handcrafted. For complex activity recognition task, AROMA applies a long short-term memory (LSTM) network to learn the temporal context of activity data. In addition, there is a shared structure between the two tasks, and the object functions of these two tasks are optimized jointly. We evaluate AROMA on two public datasets, and the experimental results show that AROMA is able to yield a competitive performance in both simple and complex activity recognitions.",
"title": ""
},
{
"docid": "c37296d4b2673e69ecbe78a3fb1d4440",
"text": "Deep learning-based techniques have achieved stateof-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-theart performance on the user’s training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and—because the behavior of neural networks is difficult to explicate— stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging",
"title": ""
},
{
"docid": "5980e6111c145db3e1bfc5f47df7ceaf",
"text": "Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data-and the CNNs outperformed the human test persons.",
"title": ""
},
{
"docid": "758310a8bcfcdec01b11889617f5a2c7",
"text": "1 †This paper is an extended version of the ICSCA 2017 paper “Reference scope identification for citances by classification with text similarity measures” [55]. This work is supported by the Ministry of Science and Technology (MOST), Taiwan (Grant number: MOST 104-2221-E-178-001). *Corresponding author. Tel: +886 4 23226940728; fax: +886 4 23222621. On Identifying Cited Texts for Citances and Classifying Their Discourse Facets by Classification Techniques",
"title": ""
},
{
"docid": "7140f8152de03babecf774149722ff58",
"text": "We study techniques for monitoring and understanding real-world human activities, in particular of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically overtake and brake events. Study this particular domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help to recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "4ce8934f295235acc2bbf03c7530842b",
"text": "— Speech recognition has found its application on various aspects of our daily lives from automatic phone answering service to dictating text and issuing voice commands to computers. In this paper, we present the historical background and technological advances in speech recognition technology over the past few decades. More importantly, we present the steps involved in the design of a speaker-independent speech recognition system. We focus mainly on the pre-processing stage that extracts salient features of a speech signal and a technique called Dynamic Time Warping commonly used to compare the feature vectors of speech signals. These techniques are applied for recognition of isolated as well as connected words spoken. We conduct experiments on MATLAB to verify these techniques. Finally, we design a simple 'Voice-to-Text' converter application using MATLAB.",
"title": ""
},
{
"docid": "d341486002f2b0f5e620f5a63873577c",
"text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.",
"title": ""
},
{
"docid": "9a30b1c93925d8fae0e9ba9954faffef",
"text": "The technological revolution that has taken place in recent decades, driven by advances and developments in Information and Communication Technologies (ICT) has revolutionized the way people communicate, work, travel, live, etc. Cities need to evolve towards intelligent dynamic infrastructures that serve citizens fulfilling the criteria of energy efficiency and sustainability. This article provides an overview of the main smart city applications, and their implementation status in major cities around the world. We also present a study of patents on basic smart city technologies in order to show which countries and companies are making greater efforts to register the intellectual property. The relation between patented technologies and current ongoing smart city applications is also investigated.",
"title": ""
},
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "61d506905286fc3297622d1ac39534f0",
"text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.",
"title": ""
},
{
"docid": "c6c1ba04c8a2191f2d1b4bd970b93aff",
"text": "In this paper, a complete sensitivity analysis of the optimal parameters for the axial flux permanent magnet synchronous machines working in the field weakening region is implemented. Thanks to the presence of a parameterized accurate analytical model, it is possible to obtain all the required parameters of the machine. The two goals of the ideal design are to maximize the power density: <inline-formula><tex-math notation=\"LaTeX\">$P_{\\text{density}}$ </tex-math></inline-formula> and the ratio of maximal to rated speed: <inline-formula><tex-math notation=\"LaTeX\"> $n_{\\max}/n_r$</tex-math></inline-formula>, which is an inductance related parameter keeping the efficiency at the target speed above 90<inline-formula><tex-math notation=\"LaTeX\">$\\%$</tex-math></inline-formula>. Different slots/poles/phases combinations are studied to reveal the optimum combination for each phase. This paper has studied the effect of the ratio of number of stator slots to number of rotor poles on the <inline-formula> <tex-math notation=\"LaTeX\">$P_{\\text{density}}$</tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$n_{\\max}/n_r$</tex-math></inline-formula>. It is shown that a low value of this parameter results in a better <inline-formula><tex-math notation=\"LaTeX\">$P_{\\text{density}}$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$n_{\\max}/n_r$</tex-math></inline-formula>. The effect of the outer diameter, and the inner to outer diameter ratio are studied with respect to the two design goals. In addition, a comparison between the finite and the theoretical infinite speed designs is implemented. A complete 3D finite element validation has proven the robustness of the analytical model.",
"title": ""
},
{
"docid": "c76f00a8fa53c307da2d464d060a171f",
"text": "The field of speech recognition has clearly benefited from precisely defined testing conditions and objective performance measures such as word error rate. In the development and evaluation of new methods, the question arises whether the empirically observed difference in performance is due to a genuine advantage of one system over the other, or just an effect of chance. However, many publications still do not concern themselves with the statistical significance of the results reported. We present a bootstrap method for significance analysis which is, at the same time, intuitive, precise and and easy to use. Unlike some methods, we make no (possibly ill-founded) approximations and the results are immediately interpretable in terms of word error rate.",
"title": ""
}
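The bootstrap significance analysis summarized in the preceding abstract can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: it assumes per-utterance edit-error counts and reference word counts are already available, and all function and variable names are invented for the example.

```python
import numpy as np

def bootstrap_wer_difference(err_a, err_b, n_words, n_boot=10_000, seed=0):
    """Bootstrap the WER difference between two systems.

    err_a, err_b : per-utterance edit-error counts for systems A and B
    n_words      : per-utterance reference word counts
    Returns the observed difference, a 95% confidence interval, and the
    fraction of resamples in which system A has the lower WER.
    """
    rng = np.random.default_rng(seed)
    err_a, err_b, n_words = map(np.asarray, (err_a, err_b, n_words))
    n_utt = len(n_words)

    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_utt, size=n_utt)      # resample whole utterances
        wer_a = err_a[idx].sum() / n_words[idx].sum()
        wer_b = err_b[idx].sum() / n_words[idx].sum()
        diffs[b] = wer_a - wer_b

    observed = err_a.sum() / n_words.sum() - err_b.sum() / n_words.sum()
    ci = np.percentile(diffs, [2.5, 97.5])
    p_a_better = float(np.mean(diffs < 0))
    return observed, ci, p_a_better

# toy usage with made-up counts
obs, ci, p = bootstrap_wer_difference(
    err_a=[3, 1, 0, 5], err_b=[4, 2, 1, 5], n_words=[12, 9, 7, 15])
print(obs, ci, p)
```

Resampling whole utterances, rather than individual words, keeps within-utterance error correlations intact, which is the usual motivation for preferring the bootstrap over closed-form approximations.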
] |
scidocsrr
|
1039bcb6e194a1a4cbf0323a03bbd872
|
Unsupervised Induction of Contingent Event Pairs from Film Scenes
|
[
{
"docid": "56d0609fe4e68abbce27124dd5291033",
"text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. Results on Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and the first algorithm can achieve an absolute average f-score improvement of 3% over a state of the art baseline system.",
"title": ""
},
{
"docid": "e6f506c3c90a15b5e4079ccb75eb3ff0",
"text": "Stories of people's everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million",
"title": ""
}
] |
[
{
"docid": "9a515a1266a868ca5680fc5676ca4b37",
"text": "To assure that an autonomous car is driving safely on public roads, its object detection module should not only work correctly, but show its prediction confidence as well. Previous object detectors driven by deep learning do not explicitly model uncertainties in the neural network. We tackle with this problem by presenting practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds. The proposed probabilistic detector represents reliable epistemic uncertainty and aleatoric uncertainty in classification and localization tasks. Experimental results show that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion. The results also show that we can improve the detection performance by 1%–5% by modeling the aleatoric uncertainty.",
"title": ""
},
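A common way to obtain the epistemic/aleatoric split mentioned in the abstract above is to run several stochastic forward passes (for example Monte Carlo dropout) and let the network also predict a variance for each output. The sketch below shows only that post-processing step under those assumptions; it is not the paper's detector, and all names are illustrative.

```python
import numpy as np

def decompose_uncertainty(mc_means, mc_vars):
    """Split predictive uncertainty into epistemic and aleatoric parts.

    mc_means : (T, N) predicted means from T stochastic forward passes
               for N regression targets (e.g., box coordinates).
    mc_vars  : (T, N) variances predicted by the network itself
               (an assumed aleatoric output head).
    """
    predictive_mean = mc_means.mean(axis=0)
    epistemic = mc_means.var(axis=0)      # spread across model samples
    aleatoric = mc_vars.mean(axis=0)      # data noise estimated by the net
    total = epistemic + aleatoric
    return predictive_mean, epistemic, aleatoric, total

# toy usage: 20 stochastic passes over 4 outputs
rng = np.random.default_rng(1)
means = rng.normal(loc=2.0, scale=0.1, size=(20, 4))
vars_ = np.abs(rng.normal(loc=0.05, scale=0.01, size=(20, 4)))
print(decompose_uncertainty(means, vars_))
```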
{
"docid": "cf7c5cd5f4caa6ded09f8b91d9f0ea16",
"text": "Covariance matrix has recently received increasing attention in computer vision by leveraging Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has now been used as a generic representation in various recognition tasks. However, covariance matrix has shortcomings such as being prone to be singular, limited capability in modeling complicated feature relationship, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations shall be explored to achieve better recognition. It proposes an open framework to use the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework significantly elevates covariance representation to the unlimited opportunities provided by this new representation. Experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating the state-of-the-art performance over both the covariance and the existing non-covariance representations.",
"title": ""
},
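The abstract above proposes a kernel matrix computed over feature dimensions as an SPD representation in place of a covariance matrix. A rough sketch of that idea, with an RBF kernel and a log-Euclidean distance between the resulting matrices, might look as follows; the kernel choice, the regularization, and all names are assumptions made for illustration.

```python
import numpy as np

def feature_kernel_matrix(X, gamma=1.0, eps=1e-6):
    """Build an RBF kernel matrix over the feature *dimensions* of one sample.

    X : (n_observations, d) array describing one image or sequence,
        e.g. d skeleton-joint features observed over n frames.
    Returns a (d, d) symmetric positive-definite matrix.
    """
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X   # pairwise distances of columns
    K = np.exp(-gamma * np.maximum(d2, 0.0))
    return K + eps * np.eye(X.shape[1])              # regularize to keep K SPD

def log_euclidean_distance(A, B):
    """Distance between two SPD matrices via their matrix logarithms."""
    def _logm_spd(M):
        w, V = np.linalg.eigh(M)
        return (V * np.log(np.maximum(w, 1e-12))) @ V.T
    return np.linalg.norm(_logm_spd(A) - _logm_spd(B), ord="fro")

# toy usage: two "videos" described by 30 frames x 10 features each
rng = np.random.default_rng(0)
K1 = feature_kernel_matrix(rng.normal(size=(30, 10)))
K2 = feature_kernel_matrix(rng.normal(size=(30, 10)))
print(log_euclidean_distance(K1, K2))
```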
{
"docid": "631b473342cc30360626eaea0734f1d8",
"text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.",
"title": ""
},
{
"docid": "b017fd773265c73c7dccad86797c17b8",
"text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.",
"title": ""
},
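A bare-bones version of the uncertainty-sampling heuristic that underlies many of the surveyed SVM active-learning methods is sketched below; the data, the query budget, and the simulated oracle are all made up for the example and stand in for real labeled pixels and a human photo-interpreter.

```python
import numpy as np
from sklearn.svm import SVC

def margin_sampling_query(clf, X_unlabeled, n_queries=10):
    """Pick the unlabeled samples closest to the SVM decision boundary."""
    scores = np.abs(clf.decision_function(X_unlabeled))
    if scores.ndim > 1:                    # multi-class: use least-confident score
        scores = scores.max(axis=1)
    return np.argsort(scores)[:n_queries]  # indices of the most uncertain samples

# toy active-learning loop on random data
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
X_pool = rng.normal(size=(200, 5))

for _ in range(3):
    clf = SVC(kernel="rbf", gamma="scale").fit(X_lab, y_lab)
    query_idx = margin_sampling_query(clf, X_pool, n_queries=5)
    # an oracle would label these samples here; we fake the labels so the
    # example runs end to end
    new_y = rng.integers(0, 2, size=len(query_idx))
    X_lab = np.vstack([X_lab, X_pool[query_idx]])
    y_lab = np.concatenate([y_lab, new_y])
    X_pool = np.delete(X_pool, query_idx, axis=0)
```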
{
"docid": "4915acc826761f950783d9d4206857c0",
"text": "The cognitive modulation of pain is influenced by a number of factors ranging from attention, beliefs, conditioning, expectations, mood, and the regulation of emotional responses to noxious sensory events. Recently, mindfulness meditation has been found attenuate pain through some of these mechanisms including enhanced cognitive and emotional control, as well as altering the contextual evaluation of sensory events. This review discusses the brain mechanisms involved in mindfulness meditation-related pain relief across different meditative techniques, expertise and training levels, experimental procedures, and neuroimaging methodologies. Converging lines of neuroimaging evidence reveal that mindfulness meditation-related pain relief is associated with unique appraisal cognitive processes depending on expertise level and meditation tradition. Moreover, it is postulated that mindfulness meditation-related pain relief may share a common final pathway with other cognitive techniques in the modulation of pain.",
"title": ""
},
{
"docid": "d2f5f5b42d732a5d27310e4f2d76116a",
"text": "This paper reports on a cluster analysis of pervasive games through a bottom-up approach based upon 120 game examples. The basis for the clustering algorithm relies on the identification of pervasive gameplay design patterns for each game from a set of 75 possible patterns. The resulting hierarchy presents a view of the design space of pervasive games, and details of clusters and novel gameplay features are described. The paper concludes with a view over how the clusters relate to existing genres and models of pervasive games.",
"title": ""
},
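The bottom-up clustering of games by the presence or absence of gameplay design patterns described above could be reproduced in miniature as follows; the game names, patterns, distance metric, and linkage choice are illustrative assumptions, not the paper's actual data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# toy game-by-pattern matrix: rows are games, columns are design patterns,
# 1 means the pattern was identified in that game (values are made up)
games = ["GeoTreasure", "CityChase", "GhostHunt", "PuzzleWalk"]
patterns = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])

# Jaccard distance is a natural choice for presence/absence pattern vectors
dist = pdist(patterns.astype(bool), metric="jaccard")
tree = linkage(dist, method="average")

# cut the dendrogram into two clusters and print the grouping
labels = fcluster(tree, t=2, criterion="maxclust")
for game, lab in zip(games, labels):
    print(game, "-> cluster", lab)
```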
{
"docid": "b6005996503a6f53da5f35f90ce02548",
"text": "The performance of file systems and related software depends on characteristics of the underlying file-system image (i.e., file-system metadata and file contents). Unfortunately, rather than benchmarking with realistic file-system images, most system designers and evaluators rely on ad hoc assumptions and (often inaccurate) rules of thumb. Furthermore, the lack of standardization and reproducibility makes file-system benchmarking ineffective. To remedy these problems, we develop Impressions, a framework to generate statistically accurate file-system images with realistic metadata and content. Impressions is flexible, supporting user-specified constraints on various file-system parameters using a number of statistical techniques to generate consistent images. In this article, we present the design, implementation, and evaluation of Impressions and demonstrate its utility using desktop search as a case study. We believe Impressions will prove to be useful to system developers and users alike.",
"title": ""
},
{
"docid": "e1d8ec65a2917792c186cbc125a99368",
"text": "In recent years, artificial intelligence has made a significant breakthrough and progress in the field of humanmachine conversation. However, how to generate high-quality, emotional and subhuman conversation still a troublesome work. The key factor of man-machine dialogue is whether the chatbot can give a good response in content and emotional level. How to ensure that the robot understands the user’s emotions, and consider the user’s emotions then give a satisfactory response. In this paper, we add the emotional tags to the post and response from the dataset respectively. The emotional tags, as the emotional tags of post and response, represent the emotions expressed by this sentence. The purpose of our emotional tags is to make the chatbot understood the emotion of the input sequence more directly so that it has a recognition of the emotional dimension. In this paper, we apply the mechanism of GAN network on our conversation model. For the generator: We make full use of Encoder-Decoder structure form a seq2seq model, which is used to generate a sentence’s response. For the discriminator: distinguish between the human-generated dialogues and the machine-generated ones.The outputs from the discriminator are used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. We cast our task as an RL(Reinforcement Learning) problem, using a policy gradient method to reward more subhuman conversational sequences, and in addition we have added an emotion tags to represent the response we want to get, which we will use as a rewarding part of it, so that the emotions of real responses can be closer to the emotions we specify. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion, which can be used to control and adjust users emotion. Compared with our previous work, we get a better performance on the same data set, and we get less ’’safe’’ response than before, but there will be a certain degree of existence.",
"title": ""
},
{
"docid": "f213bc5b5a16b381262aefe842babc59",
"text": "Optogenetic methodology enables direct targeting of specific neural circuit elements for inhibition or excitation while spanning timescales from the acute (milliseconds) to the chronic (many days or more). Although the impact of this temporal versatility and cellular specificity has been greater for basic science than clinical research, it is natural to ask whether the dynamic patterns of neural circuit activity discovered to be causal in adaptive or maladaptive behaviors could become targets for treatment of neuropsychiatric diseases. Here, we consider the landscape of ideas related to therapeutic targeting of circuit dynamics. Specifically, we highlight optical, ultrasonic, and magnetic concepts for the targeted control of neural activity, preclinical/clinical discovery opportunities, and recently reported optogenetically guided clinical outcomes.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "b3ecb6eb53256f2a0981fdcdac8b6e42",
"text": "Explanation-Based Learning (EBL) is a widely-used technique for acquiring searchcontrol knowledge. Recently, Prieditis, van Harmelen, and Bundy pointed to the similarity between Partial Evaluation (PE) and EBL. However, EBL utilizes training examples whereas PE does not. It is natural to inquire, therefore, whether PE can be used to acquire searchcontrol knowledge, and if so at what cost? This paper answers these questions by means of a case study comparing prodigy/ebl, a state-of-the-art EBL system, and static, a PEbased analyzer of problem-space de nitions. When tested in prodigy/ebl's benchmark problem spaces, static generated search-control knowledge that was up to three times as e ective as the knowledge learned by prodigy/ebl, and did so from twenty-six to seventyseven times faster. The paper describes static's algorithms, compares its performance to prodigy/ebl's, noting when static's superior performance will scale up and when it will not. The paper concludes with several lessons for the design of EBL systems, suggesting hybrid PE/EBL systems as a promising direction for future research. static is available by sending mail to the author at etzioni@cs.washington.edu. The prodigy system, and the information necessary to replicate the experiments in this paper, is available by sending mail to prodigy@cs.cmu.edu.",
"title": ""
},
{
"docid": "4e1a8239889f95f159a086f4c2fb20c6",
"text": "Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.",
"title": ""
},
{
"docid": "0c57dd3ce1f122d3eb11a98649880475",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "3432123018be278cb2e85892925ce4e6",
"text": "The cellular heterogeneity and complex tissue architecture of most tumor samples is a major obstacle in image analysis on standard hematoxylin and eosin-stained (H&E) tissue sections. A mixture of cancer and normal cells complicates the interpretation of their cytological profiles. Furthermore, spatial arrangement and architectural organization of cells are generally not reflected in cellular characteristics analysis. To address these challenges, first we describe an automatic nuclei segmentation of H&E tissue sections. In the task of deconvoluting cellular heterogeneity, we adopt Landmark based Spectral Clustering (LSC) to group individual nuclei in such a way that nuclei in the same group are more similar. We next devise spatial statistics for analyzing spatial arrangement and organization, which are not detectable by individual cellular characteristics. Our quantitative, spatial statistics analysis could benefit H&E section analysis by refining and complementing cellular characteristics analysis.",
"title": ""
},
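To illustrate the grouping step described above: the paper uses Landmark-based Spectral Clustering (LSC), which scales spectral clustering up by routing affinities through a small set of landmark points. The basic grouping idea can be shown with ordinary spectral clustering from scikit-learn on hypothetical per-nucleus feature vectors; every value below is synthetic and the feature names are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# hypothetical per-nucleus features (area, eccentricity, mean intensity,
# texture score); real values would come from the segmentation step
rng = np.random.default_rng(0)
tumor_like = rng.normal([220.0, 0.80, 0.35, 2.0], [15.0, 0.05, 0.05, 0.2], size=(60, 4))
normal_like = rng.normal([120.0, 0.40, 0.60, 1.0], [15.0, 0.05, 0.05, 0.2], size=(60, 4))
features = np.vstack([tumor_like, normal_like])

# standardize so no single feature dominates the RBF affinity
features = (features - features.mean(axis=0)) / features.std(axis=0)

model = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
labels = model.fit_predict(features)
print(np.bincount(labels))   # rough size of each nucleus group
```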
{
"docid": "d4ca93d0aeabda1b90bb3f0f16df9ee8",
"text": "Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card. a 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3792c6e065227cdbe8a9f87882224891",
"text": "The increasing size of workloads has led to the development of new technologies and architectures that are intended to help address the capacity limitations of DRAM main memories. The proposed solutions fall into two categories: those that re-engineer Flash-based SSDs to further improve storage system performance and those that incorporate non-volatile technology into a Hybrid main memory system. These developments have blurred the line between the storage and memory systems. In this paper, we examine the differences between these two approaches to gain insight into the types of applications and memory technologies that benefit the most from these different architectural approaches.\n In particular this work utilizes full system simulation to examine the impact of workload randomness on system performance, the impact of backing store latency on system performance, and how the different implementations utilize system resources differently. We find that the software overhead incurred by storage based implementations can account for almost 50% of the overall access latency. As a result, backing store technologies that have an access latency up to 25 microseconds tend to perform better when implemented as part of the main memory system. We also see that high degrees of random access can exacerbate the software overhead problem and lead to large performance advantages for the Hybrid main memory approach. Meanwhile, the page replacement algorithm utilized by the OS in the storage approach results in considerably better performance on highly sequential workloads at the cost of greater pressure on the cache.",
"title": ""
},
{
"docid": "37d3954ce00a1f9fd90c6adfde388ab1",
"text": "Computational personality traits assessment is one of an interesting areas in affective computing. It becomes popular because personality identification can be used in many areas and get benefits. Such areas are business, politics, education, social media, medicine, and user interface design. The famous statement \"Face is a mirror of the mind\" proves that person's appearance depends on the inner aspects of a person. Conversely, Person's behavior and appearance describe the person's personality, so an analyze on appearance and behavior gives knowledge on personality traits. There are varieties of methods have been discovered by researchers to assess personality computationally with various machine learning algorithms. In this paper reviews methods and theories involved in psychological traits assessment and evolution of computational psychological traits assessment with different machine learning algorithms and different feature sets.",
"title": ""
},
{
"docid": "103e3212f2d1302c7a901be0d3f46e31",
"text": "This article explores dominant discourses surrounding male and female genital cutting. Over a similar period of time, these genital operations have separately been subjected to scrutiny and criticism. However, although critiques of female circumcision have been widely taken up, general public opinion toward male circumcision remains indifferent. This difference cannot merely be explained by the natural attributes and effects of these practices. Rather, attitudes toward genital cutting reflect historically and culturally specific understandings of the human body. In particular, I suggest that certain problematic understandings of male and female sexuality are deeply implicated in the dominant Western discourses on genital surgery.",
"title": ""
},
{
"docid": "a3cb839b4299a50c475b2bb1b608ee91",
"text": "In this work, we present an event detection method in Twitter based on clustering of hashtags and introduce an enhancement technique by using the semantic similarities between the hashtags. To this aim, we devised two methods for tweet vector generation and evaluated their effect on clustering and event detection performance in comparison to word-based vector generation methods. By analyzing the contexts of hashtags and their co-occurrence statistics with other words, we identify their paradigmatic relationships and similarities. We make use of this information while applying a lexico-semantic expansion on tweet contents before clustering the tweets based on their similarities. Our aim is to tolerate spelling errors and capture statements which actually refer to the same concepts. We evaluate our enhancement solution on a three-day dataset of tweets with Turkish content. In our evaluations, we observe clearer clusters, improvements in accuracy, and earlier event detection times.",
"title": ""
},
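The paradigmatic similarity between hashtags derived from their co-occurrence contexts, as described above, can be approximated with a simple count-based sketch like the one below; the tweets, the tokenization pattern, and the similarity threshold are assumptions made for illustration rather than the paper's pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# toy tweets; the real input would be the preprocessed tweet stream
tweets = [
    "big earthquake tonight #deprem stay safe",
    "buildings shaking again #earthquake #deprem",
    "championship game tonight #futbol",
    "what a goal #futbol #soccer",
]

# bag-of-words matrix that keeps hashtags as their own tokens
vec = CountVectorizer(token_pattern=r"#?\w+")
X = vec.fit_transform(tweets).toarray()
vocab = np.array(vec.get_feature_names_out())
is_tag = np.array([t.startswith("#") for t in vocab])

# hashtag-by-word co-occurrence counts, then cosine similarity between hashtags
cooc = (X[:, is_tag].T @ X[:, ~is_tag]).astype(float)
sim = cosine_similarity(cooc)
tags = vocab[is_tag]
for i in range(len(tags)):
    for j in range(i + 1, len(tags)):
        if sim[i, j] > 0.3:   # candidate pair for lexico-semantic expansion
            print(tags[i], "~", tags[j], round(float(sim[i, j]), 2))
```

Hashtag pairs that share enough context words would then be treated as interchangeable during tweet expansion, which is the step the abstract credits with tolerating spelling variants.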
{
"docid": "b315027b7db892563e16e5e371fd41f1",
"text": "In this paper we show how hardware functionalities can be misused by an attacker to extend her control over a system. The originality of our approach is that it exploits seldom used processor and chipset functionalities, such as switching to system management mode, to escalate local privileges in spite of security restrictions imposed by the operating system. As an example we present a new attack scheme against OpenBSD on x86-based architectures. On such a system the superuser is only granted limited privileges. The attack allows her to get full privileges over the system, including unrestricted access to physical memory. Our sample code shows how the superuser can lower the “secure level” from highly secure to permanently insecure mode. To the best of our knowledge, it is the first time that documented processor and chipset functionalities have been used to circumvent operating system security functions.",
"title": ""
}
] |
scidocsrr
|
59b79a9c8b199879a453f7dc697d4b57
|
Noise Estimation from a Single Image
|
[
{
"docid": "8055b2c65d5774000fe4fa81ff83efb7",
"text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charged-coupled device ( C C D ) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index T e m s C C D cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.",
"title": ""
},
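A toy version of the kind of noise-parameter estimation discussed above, assuming a stack of repeated captures of a static scene is available: fit a linear mean-variance model from per-pixel temporal statistics. The synthetic data and the simple linear model are assumptions made for the example, not the paper's full sensor model.

```python
import numpy as np

def estimate_noise_model(frames):
    """Fit a linear model  var = a * mean + b  from repeated captures
    of a static scene.

    frames : (T, H, W) array of T repeated captures.
    Returns the gain-like slope a and the dark/read-noise floor b.
    """
    frames = np.asarray(frames, dtype=np.float64)
    pixel_mean = frames.mean(axis=0).ravel()
    pixel_var = frames.var(axis=0, ddof=1).ravel()
    a, b = np.polyfit(pixel_mean, pixel_var, deg=1)
    return a, b

# synthetic demonstration: Poisson-like shot noise plus constant read noise
rng = np.random.default_rng(0)
clean = rng.uniform(20, 200, size=(64, 64))             # static scene irradiance
stack = rng.poisson(clean, size=(50, 64, 64)) + rng.normal(0, 2.0, size=(50, 64, 64))
print(estimate_noise_model(stack))                       # slope near 1, intercept near 4
```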
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
},
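The wavelet soft-thresholding rule with the universal threshold sqrt(2 log n) described above is easy to sketch with PyWavelets. Note the abstract states the threshold for unit-variance noise; the sketch below instead estimates the noise level from the finest-scale coefficients, which is a common practical variant and not the paper's exact recipe. Wavelet family, decomposition level, and the test signal are assumptions.

```python
import numpy as np
import pywt

def visu_shrink(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet coefficients with the universal threshold
    sigma * sqrt(2 * log(n)), estimating sigma from the finest-scale
    coefficients via the median absolute deviation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# noisy samples of a piecewise-smooth function on [0, 1]
n = 1024
t = np.arange(n) / n
clean = np.sin(4 * np.pi * t) + 0.3 * np.sign(t - 0.5)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=n)
estimate = visu_shrink(noisy)
print(np.mean((estimate - clean) ** 2), np.mean((noisy - clean) ** 2))
```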
{
"docid": "5497e6be671aa7b5f412590873b04602",
"text": "Since the rst shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many di erent approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster. The implementation of these algorithms in C, and images used in this paper, are available by anonymous ftp under the pub=tech paper=survey directory at eustis:cs:ucf:edu (132.170.108.42). These are also part of the electronic version of paper.",
"title": ""
}
] |
[
{
"docid": "b1dc8163cdcaefcf313d6a6155922ad6",
"text": "Light Detection and Ranging (LiDAR) is an active sensor that can effectively acquire a large number of three-dimensional (3-D) points. LiDAR systems can be equipped on different platforms for different applications, but to integrate the data, point cloud registration is needed to improve geometric consistency. The registration of airborne and terrestrial mobile LiDAR is a challenging task because the point densities and scanning directions differ. We proposed a scheme for the registration of airborne and terrestrial mobile LiDAR using the least squares 3-D surface registration technique to minimize the surfaces between two datasets. To analyze the effect of point density in registration, the simulation data simulated different conditions and estimated the theoretical errors. The test data were the point clouds of the airborne LiDAR system (ALS) and the mobile LiDAR system (MLS), which were acquired by Optech ALTM 3070 and Lynx, respectively. The resulting simulation analysis indicated that the accuracy of registration improved as the density increased. For the test dataset, the registration error of mobile LiDAR between different trajectories improved from 40 cm to 4 cm, and the registration error between ALS and MLS improved from 84 cm to 4 cm. These results indicate that the proposed methods can obtain 5 cm accuracy between ALS and MLS.",
"title": ""
},
{
"docid": "a72932cd98f425eafc19b9786da4319d",
"text": "Recommender systems are changing from novelties used by a few E-commerce sites, to serious business tools that are re-shaping the world of E-commerce. Many of the largest commerce Web sites are already using recommender systems to help their customers find products to purchase. A recommender system learns from a customer and recommends products that she will find most valuable from among the available products. In this paper we present an explanation of how recommender systems help E-commerce sites increase sales, and analyze six sites that use recommender systems including several sites that use more than one recommender system. Based on the examples, we create a taxonomy of recommender systems, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. We conclude with ideas for new applications of recommender systems to E-commerce.",
"title": ""
},
{
"docid": "8dc9170093a0317fff3971b18f758ff3",
"text": "In many Web applications, such as blog classification and new-sgroup classification, labeled data are in short supply. It often happens that obtaining labeled data in a new domain is expensive and time consuming, while there may be plenty of labeled data in a related but different domain. Traditional text classification ap-proaches are not able to cope well with learning across different domains. In this paper, we propose a novel cross-domain text classification algorithm which extends the traditional probabilistic latent semantic analysis (PLSA) algorithm to integrate labeled and unlabeled data, which come from different but related domains, into a unified probabilistic model. We call this new model Topic-bridged PLSA, or TPLSA. By exploiting the common topics between two domains, we transfer knowledge across different domains through a topic-bridge to help the text classification in the target domain. A unique advantage of our method is its ability to maximally mine knowledge that can be transferred between domains, resulting in superior performance when compared to other state-of-the-art text classification approaches. Experimental eval-uation on different kinds of datasets shows that our proposed algorithm can improve the performance of cross-domain text classification significantly.",
"title": ""
},
{
"docid": "7677b67bd95f05c2e4c87022c3caa938",
"text": "The semi-supervised learning usually only predict labels for unlabeled data appearing in training data, and cannot effectively predict labels for testing data never appearing in training set. To handle this outof-sample problem, many inductive methods make a constraint such that the predicted label matrix should be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose to use an elastic embedding constraint on the predicted label matrix such that the manifold structure can be better explored. To solve our new objective and also a more general optimization problem, we study a novel adaptive loss with efficient optimization algorithm. Our new adaptive loss minimization method takes the advantages of both L1 norm and L2 norm, and is robust to the data outlier under Laplacian distribution and can efficiently learn the normal data under Gaussian distribution. Experiments have been performed on image classification tasks and our approach outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "809b40cd0089410592d7b7f77f04c8e4",
"text": "This paper presents a new method for segmentation and interpretation of 3D point clouds from mobile LIDAR data. The main contribution of this work is the automatic detection and classification of artifacts located at the ground level. The detection is based on Top-Hat of hole filling algorithm of range images. Then, several features are extracted from the detected connected components (CCs). Afterward, a stepwise forward variable selection by using Wilk's Lambda criterion is performed. Finally, CCs are classified in four categories (lampposts, pedestrians, cars, the others) by using a SVM machine learning method.",
"title": ""
},
{
"docid": "236bef55b95e62e3ad3d5b1de8449abb",
"text": "In this paper, we argue that an annotation scheme for argumentation mining is a function of the task requirements and the corpus properties. There is no one-sizefits-all argumentation theory to be applied to realistic data on the Web. In two annotation studies, we experiment with 80 German newspaper editorials from the Web and about one thousand English documents from forums, comments, and blogs. Our example topics are taken from the educational domain. To formalize the problem of annotating arguments, in the first case, we apply a Claim-Premise scheme, and in the second case, we modify Toulmin’s scheme. We find that the choice of the argument components to be annotated strongly depends on the register, the length of the document, and inherently on the literary devices and structures used for expressing argumentation. We hope that these findings will facilitate the creation of reliably annotated argumentation corpora for a wide range of tasks and corpus types and will help to bridge the gap between argumentation theories and actual application needs.",
"title": ""
},
{
"docid": "25921de89de837e2bcd2a815ec181564",
"text": "Satellite-based Global Positioning Systems (GPS) have enabled a variety of location-based services such as navigation systems, and become increasingly popular and important in our everyday life. However, GPS does not work well in indoor environments where walls, floors and other construction objects greatly attenuate satellite signals. In this paper, we propose an Indoor Positioning System (IPS) based on widely deployed indoor WiFi systems. Our system uses not only the Received Signal Strength (RSS) values measured at the current location but also the previous location information to determine the current location of a mobile user. We have conducted a large number of experiments in the Schorr Center of the University of Nebraska-Lincoln, and our experiment results show that our proposed system outperforms all other WiFi-based RSS IPSs in the comparison, and is 5% more accurate on average than others. iii ACKNOWLEDGMENTS Firstly, I would like to express my heartfelt gratitude to my advisor and committee chair, Professor Lisong Xu and the co-advisor Professor Zhigang Shen for their constant encouragement and guidance throughout the course of my master's study and all the stages of the writing of this thesis. Without their consistent and illuminating instruction, this thesis work could not have reached its present form. Their technical and editorial advice and infinite patience were essential for the completion of this thesis. I feel privileged to have had the opportunity to study under them. I thank Professor Ziguo Zhong and Professor Mehmet Vuran for serving on my Master's Thesis defense committee, and their involvement has greatly improved and clarified this work. I specially thank Prof Ziguo Zhong again, since his support has always been very generous in both time and research resources. I thank all the CSE staff and friends, for their friendship and for all the memorable times in UNL. I would like to thank everyone who has helped me along the way. At last, I give my deepest thanks go to my parents for their self-giving love and support throughout my life.",
"title": ""
},
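A minimal sketch of fingerprinting that combines the current RSS vector with the previous location estimate, in the spirit of the system described above. The radio map, the path-loss model used to fake it, and the way the two cost terms are weighted are all illustrative assumptions; in practice the RSS term (dBm) and the distance term (meters) live on different scales and would need calibration.

```python
import numpy as np

def locate(rss, prev_xy, fingerprints, positions, k=3, alpha=0.5):
    """Estimate a position from an RSS vector plus the previous position.

    rss          : (n_ap,) current received-signal-strength vector
    prev_xy      : (2,) previously estimated position
    fingerprints : (n_ref, n_ap) RSS radio map collected offline
    positions    : (n_ref, 2) coordinates of the radio-map reference points
    alpha        : weight of the motion term (0 = plain kNN fingerprinting)
    """
    rss_dist = np.linalg.norm(fingerprints - rss, axis=1)
    move_dist = np.linalg.norm(positions - prev_xy, axis=1)
    cost = (1 - alpha) * rss_dist + alpha * move_dist   # penalize implausible jumps
    nearest = np.argsort(cost)[:k]
    return positions[nearest].mean(axis=0)

# toy radio map on a 5 m grid with three access points
rng = np.random.default_rng(0)
positions = np.array([[x, y] for x in range(0, 25, 5) for y in range(0, 25, 5)], float)
aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 20.0]])
fingerprints = np.array([[-40 - 2.0 * np.linalg.norm(p - ap) for ap in aps] for p in positions])

truth = np.array([11.0, 9.0])
rss = np.array([-40 - 2.0 * np.linalg.norm(truth - ap) for ap in aps]) + rng.normal(0, 1.5, 3)
print(locate(rss, prev_xy=np.array([10.0, 10.0]), fingerprints=fingerprints, positions=positions))
```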
{
"docid": "4c87f3fb470cb01781b563889b1261d2",
"text": "Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., ICCV 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.",
"title": ""
},
{
"docid": "7e74cc21787c1e21fd64a38f1376c6a9",
"text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.",
"title": ""
},
{
"docid": "3dcfcaa97fcc1bce04ce515027e64927",
"text": "Abs t rac t . RoboCup is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and exaznined. The first R o b o C u p competition was held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensorfllsion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's final target is a world cup with real robots, RoboCup offers a softwaxe platform for research on the software aspects of RoboCup. This paper describes technical chalhmges involw~d in RoboCup, rules, and simulation environment.",
"title": ""
},
{
"docid": "5f17432d235a991a5544ad794875a919",
"text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.",
"title": ""
},
{
"docid": "62efef8af7d8393e697eb56e85471347",
"text": "This paper presents using of JSBSim library for flight dynamics modelling of a mini-UAV (Unmanned Aerial Vehicle). The first part of the paper is about general information of UAVs and about the fundamentals of airplane flight mechanics, forces, moments, and the main components of typical aircraft. The main section briefly describes a flight dynamics model and summarizes the information about JSBSim library. Then, a way of using the library for the modelling of a mini-UAV is shown. A basic script for lifting and stabilization of the UAV has been developed and described. Finally, the results of JSBSim test are discussed.",
"title": ""
},
{
"docid": "d74131a431ca54f45a494091e576740c",
"text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.",
"title": ""
},
{
"docid": "281b0a108c1e8507f26381cc905ce9d1",
"text": "Extraction–Transform–Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.",
"title": ""
},
{
"docid": "8ad0cd1f03db395a9918bbdfdf9a3268",
"text": "Commercial anti-virus software are unable to provide protection against newly launched (a.k.a \"zero-day\") malware. In this paper, we propose a novel malware detection technique which is based on the analysis of byte-level file content. The novelty of our approach, compared with existing content based mining schemes, is that it does not memorize specific byte-sequences or strings appearing in the actual file content. Our technique is non-signature based and therefore has the potential to detect previously unknown and zero-day malware. We compute a wide range of statistical and information-theoretic features in a block-wise manner to quantify the byte-level file content. We leverage standard data mining algorithms to classify the file content of every block as normal or potentially malicious. Finally, we correlate the block-wise classification results of a given file to categorize it as benign or malware. Since the proposed scheme operates at the byte-level file content; therefore, it does not require any a priori information about the filetype. We have tested our proposed technique using a benign dataset comprising of six different filetypes --- DOC, EXE, JPG, MP3, PDF and ZIP and a malware dataset comprising of six different malware types --- backdoor, trojan, virus, worm, constructor and miscellaneous. We also perform a comparison with existing data mining based malware detection techniques. The results of our experiments show that the proposed nonsignature based technique surpasses the existing techniques and achieves more than 90% detection accuracy.",
"title": ""
},
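The block-wise, non-signature feature extraction described above can be sketched as follows; the specific statistics and the block size are assumptions made for illustration, not the paper's full feature set.

```python
import math
from collections import Counter

def block_features(path, block_size=4096):
    """Compute simple per-block statistics of a file's raw bytes:
    Shannon entropy, mean byte value, and fraction of zero bytes."""
    feats = []
    with open(path, "rb") as fh:
        while True:
            block = fh.read(block_size)
            if not block:
                break
            counts = Counter(block)
            n = len(block)
            entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
            feats.append({
                "entropy": entropy,
                "mean_byte": sum(block) / n,
                "zero_fraction": counts.get(0, 0) / n,
            })
    return feats

# usage sketch: these per-block vectors would feed a standard classifier
# (e.g., a decision tree), and the per-block verdicts would then be
# aggregated into a benign/malware decision for the whole file
if __name__ == "__main__":
    for i, f in enumerate(block_features(__file__)):
        print(i, f)
```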
{
"docid": "18480c92c48df7318d0c7317bc63ff40",
"text": "For digital rights management (drm) software implementations incorporating cryptography, white-box cryptography (cryptographic implementation designed to withstand the white-box attack context) is more appropriate than traditional black-box cryptography. In the whitebox context, the attacker has total visibility into software implementation and execution. Our objective is to prevent extraction of secret keys from the program. We present methods to make such key extraction difficult, with focus on symmetric block ciphers implemented by substitution boxes and linear transformations. A des implementation (useful also for triple-des) is presented as a concrete example.",
"title": ""
},
{
"docid": "b4f19048d26c0620793da5f5422a865f",
"text": "Interest in supply chain management has steadily increased since the 1980s when firms saw the benefits of collaborative relationships within and beyond their own organization. Firms are finding that they can no longer compete effectively in isolation of their suppliers or other entities in the supply chain. A number of definitions of supply chain management have been proposed in the literature and in practice. This paper defines the concept of supply chain management and discusses its historical evolution. The term does not replace supplier partnerships, nor is it a description of the logistics function. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Introduction to supply chain concepts Firms can no longer effectively compete in isolation of their suppliers and other entities in the supply chain. Interest in the concept of supply chain management has steadily increased since the 1980s when companies saw the benefits of collaborative relationships within and beyond their own organization. A number of definitions have been proposed concerning the concept of “the supply chain” and its management. This paper defines the concept of the supply chain and discusses the evolution of supply chain management. The term does not replace supplier partnerships, nor is it a description of the logistics function. Industry groups are now working together to improve the integrative processes of supply chain management and accelerate the benefits available through successful implementation. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Definition of supply chain Various definitions of a supply chain have been offered in the past several years as the concept has gained popularity. The APICS Dictionary describes the supply chain as: 1 the processes from the initial raw materials to the ultimate consumption of the finished product linking across supplieruser companies; and 2 the functions within and outside a company that enable the value chain to make products and provide services to the customer (Cox et al., 1995). Another source defines supply chain as, the network of entities through which material flows. Those entities may include suppliers, carriers, manufacturing sites, distribution centers, retailers, and customers (Lummus and Alber, 1997). The Supply Chain Council (1997) uses the definition: “The supply chain – a term increasingly used by logistics professionals – encompasses every effort involved in producing and delivering a final product, from the supplier’s supplier to the customer’s customer. Four basic processes – plan, source, make, deliver – broadly define these efforts, which include managing supply and demand, sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, and delivery to the customer.” Quinn (1997) defines the supply chain as “all of those activities associated with moving goods from the raw-materials stage through to the end user. This includes sourcing and procurement, production scheduling, order processing, inventory management, transportation, warehousing, and customer service. 
Importantly, it also embodies the information systems so necessary to monitor all of those activities.” In addition to defining the supply chain, several authors have further defined the concept of supply chain management. As defined by Ellram and Cooper (1993), supply chain management is “an integrating philosophy to manage the total flow of a distribution channel from supplier to ultimate customer”. Monczka and Morgan (1997) state that “integrated supply chain management is about going from the external customer and then managing all the processes that are needed to provide the customer with value in a horizontal way”. They believe that supply chains, not firms, compete and that those who will be the strongest competitors are those that “can provide management and leadership to the fully integrated supply chain including external customer as well as prime suppliers, their suppliers, and their suppliers’ suppliers”. From these definitions, a summary definition of the supply chain can be stated as: all the activities involved in delivering a product from raw material through to the customer including sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, delivery to the customer, and the information systems necessary to monitor all of these activities. Supply chain management coordinates and integrates all of these activities into a seamless process. It links all of the partners in the chain including departments",
"title": ""
},
{
"docid": "ab98f6dc31d080abdb06bb9b4dba798e",
"text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.",
"title": ""
},
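As a concrete example of the kind of readability formula discussed above, the Flesch Reading Ease score combines average sentence length and average syllables per word. The syllable counter below is a rough heuristic, so treat the sketch as illustrative rather than a reference implementation; the sample text is invented.

```python
import re

def count_syllables(word):
    """Very rough vowel-group syllable counter (adequate for a demo only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

sample = ("The cat sat on the mat. "
          "Comprehensibility of expository passages presupposes considerable "
          "lexical familiarity and syntactic transparency.")
print(round(flesch_reading_ease(sample), 1))
```

Higher scores indicate easier text; a short, monosyllabic sentence raises the score while the long academic sentence in the sample pulls it down.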
{
"docid": "197f4782bc11e18b435f4bc568b9de79",
"text": "Protected-module architectures (PMAs) have been proposed to provide strong isolation guarantees, even on top of a compromised system. Unfortunately, Intel SGX – the only publicly available highend PMA – has been shown to only provide limited isolation. An attacker controlling the untrusted page tables, can learn enclave secrets by observing its page access patterns. Fortifying existing protected-module architectures in a realworld setting against side-channel attacks is an extremely difficult task as system software (hypervisor, operating system, . . . ) needs to remain in full control over the underlying hardware. Most stateof-the-art solutions propose a reactive defense that monitors for signs of an attack. Such approaches unfortunately cannot detect the most novel attacks, suffer from false-positives, and place an extraordinary heavy burden on enclave-developers when an attack is detected. We present Heisenberg, a proactive defense that provides complete protection against page table based side channels. We guarantee that any attack will either be prevented or detected automatically before any sensitive information leaks. Consequently, Heisenberg can always securely resume enclave execution – even when the attacker is still present in the system. We present two implementations. Heisenberg-HW relies on very limited hardware features to defend against page-table-based attacks. We use the x86/SGX platform as an example, but the same approach can be applied when protected-module architectures are ported to different platforms as well. Heisenberg-SW avoids these hardware modifications and can readily be applied. Unfortunately, it’s reliance on Intel Transactional Synchronization Extensions (TSX) may lead to significant performance overhead under real-life conditions.",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] |
scidocsrr
|
0e4901c0aabb3647fdd2456a5bc86ca8
|
Discrete laplace operator on meshed surfaces
|
[
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
] |
[
{
"docid": "2ab32a04c2d0af4a76ad29ce5a3b2748",
"text": "The future of solid-state lighting relies on how the performance parameters will be improved further for developing high-brightness light-emitting diodes. Eventually, heat removal is becoming a crucial issue because the requirement of high brightness necessitates high-operating current densities that would trigger more joule heating. Here we demonstrate that the embedded graphene oxide in a gallium nitride light-emitting diode alleviates the self-heating issues by virtue of its heat-spreading ability and reducing the thermal boundary resistance. The fabrication process involves the generation of scalable graphene oxide microscale patterns on a sapphire substrate, followed by its thermal reduction and epitaxial lateral overgrowth of gallium nitride in a metal-organic chemical vapour deposition system under one-step process. The device with embedded graphene oxide outperforms its conventional counterpart by emitting bright light with relatively low-junction temperature and thermal resistance. This facile strategy may enable integration of large-scale graphene into practical devices for effective heat removal.",
"title": ""
},
{
"docid": "47ae087577e4baa461d17780a2282d1d",
"text": "Search engine click logs provide an invaluable source of relevance information but this information is biased because we ignore which documents from the result list the users have actually seen before and after they clicked. Otherwise, we could estimate document relevance by simple counting. In this paper, we propose a set of assumptions on user browsing behavior that allows the estimation of the probability that a document is seen, thereby providing an unbiased estimate of document relevance. To train, test and compare our model to the best alternatives described in the Literature, we gather a large set of real data and proceed to an extensive cross-validation experiment. Our solution outperforms very significantly all previous models. As a side effect, we gain insight into the browsing behavior of users and we can compare it to the conclusions of an eye-tracking experiments by Joachims et al. [12]. In particular, our findings confirm that a user almost always see the document directly after a clicked document. They also explain why documents situated just after a very relevant document are clicked more often.",
"title": ""
},
{
"docid": "debb6ac09ab841987733ef83e4620d52",
"text": "One of the traditional problems in the walking and climbing robot moving in the 3D environment is how to negotiate the boundary of two plain surfaces such as corners, which may be convex or concave. In this paper a practical gait planning algorithm in the transition region of the boundary is proposed in terms of a geometrical view. The trajectory of the body is derived from the geometrical analysis of the relationship between the robot and the environment. And the position of each foot is determined by using parameters associated with the hip and the ankle of the robot. In each case of concave or convex boundaries, the trajectory that the robot moves along is determined in advance and the foot positions of the robot associated with the trajectory are computed, accordingly. The usefulness of the proposed method is confirmed through simulations and demonstrations with a walking and climbing robot.",
"title": ""
},
{
"docid": "862cf233879ef1887a3ddfe33144a067",
"text": "Data Quality has many dimensions one of which is accuracy. Accuracy is usually compromised by errors accidentally or intensionally introduced in a database system. These errors result in inconsistent, incomplete, or erroneous data elements. For example, a small variation in the representation of a data object, produces a unique instantiation of the object being represented. In order to improve the accuracy of the data stored in a database system, we need to compare them either with real-world counterparts or with other data stored in the same or a di erent system. In this paper we address the problem of matching records which refer to the same entity by computing their similarity. Exact record matching has limited applicability in this context since even simple errors like character transpositions cannot be captured in the record linking process. Our methodology deploys advanced data mining techniques for dealing with the high computational and inferential complexity of approximate record matching.",
"title": ""
},
{
"docid": "4b5336c5f2352fb7cd79b19d2538049b",
"text": "Energy-efficient computation is critical if we are going to continue to scale performance in power-limited systems. For floating-point applications that have large amounts of data parallelism, one should optimize the throughput/mm2 given a power density constraint. We present a method for creating a trade-off curve that can be used to estimate the maximum floating-point performance given a set of area and power constraints. Looking at FP multiply-add units and ignoring register and memory overheads, we find that in a 90 nm CMOS technology at 1 W/mm2, one can achieve a performance of 27 GFlops/mm2 single precision, and 7.5 GFlops/mm double precision. Adding register file overheads reduces the throughput by less than 50 percent if the compute intensity is high. Since the energy of the basic gates is no longer scaling rapidly, to maintain constant power density with scaling requires moving the overall FP architecture to a lower energy/performance point. A 1 W/mm2 design at 90 nm is a \"high-energy\" design, so scaling it to a lower energy design in 45 nm still yields a 7× performance gain, while a more balanced 0.1 W/mm2 design only speeds up by 3.5× when scaled to 45 nm. Performance scaling below 45 nm rapidly decreases, with a projected improvement of only ~3x for both power densities when scaling to a 22 nm technology.",
"title": ""
},
{
"docid": "b9ad079a04028adb9df3891ce763797b",
"text": "A n estimated 45 million people around the world are blind. 1 Most of them have lost their sight to diseases that are treatable or preventable. Eighty percent of them live in the lesser-developed world in countries where chronic economic deprivation is exacerbated by the added challenge of failing vision. Without intervention, the number of individuals with blindness might reach 76 million by 2020 because of a number of factors, primarily the rapid aging of populations in most countries. Since eye disease is seen largely in older people, the projected doubling of the world’s population older than 50 years to 2 billion by 2020 has profound effects on the number of those with blindness and low vision.",
"title": ""
},
{
"docid": "e3a2b7d38a777c0e7e06d2dc443774d5",
"text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.",
"title": ""
},
{
"docid": "a2bdce49cd3faabd3b0afbe0abd8ef54",
"text": "The revolution of World Wide Web (WWW) and smart-phone technologies have been the key-factor behind remarkable success of social networks. With the ease of availability of check-in data, the location-based social networks (LBSN) (e.g., Facebook, etc.) have been heavily explored in the past decade for Point-of-Interest (POI) recommendation. Though many POI recommenders have been defined, most of them have focused on recommending a single location or an arbitrary list that is not contextually coherent. It has been cumbersome to rely on such systems when one needs a contextually coherent list of locations, that can be used for various day-to-day activities, for e.g., itinerary planning. This paper proposes a model termed as CAPS (Context Aware Personalized POI Sequence Recommender System) that generates contextually coherent POI sequences relevant to user preferences. To the best of our knowledge, CAPS is the first attempt to formulate the contextual POI sequence modeling by extending Recurrent Neural Network (RNN) and its variants. CAPS extends RNN by incorporating multiple contexts to the hidden layer and by incorporating global context (sequence features) to the hidden layers and output layer. It extends the variants of RNN (e.g., Long-short term memory (LSTM)) by incorporating multiple contexts and global features in the gate update relations. The major contributions of this paper are: (i) it models the contextual POI sequence problem by incorporating personalized user preferences through multiple constraints (e.g., categorical, social, temporal, etc.), (ii) it extends RNN to incorporate the contexts of individual item and that of whole sequence. It also extends the gated functionality of variants of RNN to incorporate the multiple contexts, and (iii) it evaluates the proposed models against two real-world data sets.",
"title": ""
},
{
"docid": "4cce019f5f4c4cfa934e599ddf9137cb",
"text": "Many distributed graph processing frameworks have emerged for helping doing large scale data analysis for many applications including social network and data mining. The existing frameworks usually focus on the system scalability without consideration of local computing performance. We have observed two locality issues which greatly influence the local computing performance in existing systems. One is the locality of the data associated with each vertex/edge. The data are often considered as a logical undividable unit and put into continuous memory. However, it is quite common that for some computing steps, only some portions of data (called as some properties) are needed. The current data layout incurs large amount of interleaved memory access. The other issue is their execution engine applies computation at a granularity of vertex. Making optimization for the locality of source vertex of each edge will often hurt the locality of target vertex or vice versa. We have built a distributed graph processing framework called Photon to address the above issues. Photon employs Property View to store the same type of property for all vertices and edges together. This will improve the locality while doing computation with a portion of properties. Photon also employs an edge-centric execution engine with Hilbert-Order that improve the locality during computation. We have evaluated Photon with five graph applications using five real-world graphs and compared it with four existing systems. The results show that Property View and edge-centric execution design improve graph processing by 2.4X.",
"title": ""
},
{
"docid": "084cbcbdfcd755149562546dbbc46269",
"text": "PID controller is widely used in industries for control applications. Tuning of PID controller is very much essential before its implementation. There are different methods of PID tuning such as Ziegler Nichols tuning method, Internal Model Control method, Cohen Coon tuning method, Tyreus-Luyben method, Chein-Hrones-Reswick method, etc. The focus of the work in this paper is to identify the system model for a flow control loop and implement PID controller in MATLAB for simulation study and in LabVIEW for real-time experimentation. Comparative study of three tuning methods viz. ZN, IMC and CC were carried out. Further the work is to appropriately tune the PID parameters. The flow control loop was interfaced to a computer via NI-DAQ card and PID was implemented using LabVIEW. The simulation and real-time results show that IMC tuning method gives better result than ZN and CC tuning methods.",
"title": ""
},
{
"docid": "3c33528735b53a4f319ce4681527c163",
"text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈mgordy@frb.gov〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. 
Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our",
"title": ""
},
{
"docid": "8704a4033132a1d26cf2da726a60045e",
"text": "In practical classification, there is often a mix of learnable and unlearnable classes and only a classifier above a minimum performance threshold can be deployed. This problem is exacerbated if the training set is created by active learning. The bias of actively learned training sets makes it hard to determine whether a class has been learned. We give evidence that there is no general and efficient method for reducing the bias and correctly identifying classes that have been learned. However, we characterize a number of scenarios where active learning can succeed despite these difficulties.",
"title": ""
},
{
"docid": "c8fa371ea4c48d940ce551ae2eb7d864",
"text": "To accelerate the learning of reinforcement learning, many types of function approximation are used to represent state value. However function approximation reduces the accuracy of state value, and brings difficulty in the convergence. To solve the problems of tradeoff between the generalization and accuracy in reinforcement learning, we represent state-action value by two CMAC networks with different generalization parameters. The accuracy CMAC network can represent values exactly, which achieves precise control in the states around target area. And the generalization CMAC network can extend experiences to unknown area, and guide the learning of accuracy CMAC network. The algorithm proposed in this paper can effectively avoid the dilemma of achieving tradeoff between generalization and accuracy. Simulation results for the control of double inverted pendulum are presented to show effectiveness of the proposed algorithm",
"title": ""
},
{
"docid": "497d6e0bf6f582924745c7aa192579e7",
"text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.",
"title": ""
},
{
"docid": "243342f89a0670486fac8c1c4e5801c8",
"text": "Twitter is not only a social network, but also an increasingly important news media. In Twitter, retweeting is the most important information propagation mechanism, and supernodes (news medias) that have many followers are the most important information sources. Therefore, it is important to understand the news retweet propagation from supernodes and predict news popularity quickly at the very first few seconds upon publishing. Such understanding and prediction will benefit many applications such as social media management, advertisement and interaction optimization between news medias and followers. In this paper, we identify the characteristics of news propagation from supernodes from the trace data we crawled from Twitter. Based on the characteristics, we build a news popularity prediction model that can predict the final number of retweets of a news tweet very quickly. Through trace-driven experiments, we then validate our prediction model by comparing our predicted popularity and real popularity, and show its superior performance in comparison with the regression prediction model. From the study, we found that the average interaction frequency between the retweeters and the news source is correlated with news popularity. Also, the negative sentiment of news has some correlations with retweet popularity while the positive sentiment of news does not have such obvious correlation. Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "2aa4d40a0fb07996701c0148266ddc1b",
"text": "BACKGROUND/AIMS\nNeurodegenerative disorders (ND) have a major impact on quality of life (QoL) and place a substantial burden on patients, their families and carers; they are the second leading cause of disability. The objective of this study was to examine QoL in persons with ND.\n\n\nMETHODS\nA battery of subjective assessments was used, including the World Health Organization Quality of Life Questionnaire (WHOQOL-BREF) and the World Health Organization Quality of Life - Disability (WHOQOL-DIS). Psychometric properties of the WHOQOL-BREF and WHOQOL-DIS were investigated using classical psychometric methods.\n\n\nRESULTS\nParticipants (n = 149) were recruited and interviewed at two specialized centers to obtain information on health and disability perceptions, depressive symptoms (Hospital Anxiety and Depression Scale - Depression, HADS-D), Fatigue Assessment Scale (FAS), Satisfaction with Life (SWL), generic QoL (WHOQOL-BREF, WHOQOL-DIS), specific QoL (Multiple Sclerosis Impact Scale, MSIS-29; Parkinson's Disease Questionnaire, PDQ-39) and sociodemographics. Internal consistency was acceptable, except for the WHOQOL-BREF social (0.67). Associations, using Pearson's and Spearman's rho correlations, were confirmed between WHOQOL-BREF and WHOQOL-DIS with MSIS-29, PDQ-39, HADS-D, FAS and SWL. Regarding 'known group' differences, Student's t tests showed that WHOQOL-BREF and WHOQOL-DIS scores significantly discriminated between depressed and nondepressed and those perceiving a more severe impact of the disability on their lives.\n\n\nCONCLUSION\nThis study is the first to report on use of the WHOQOL-BREF and WHOQOL-DIS in Spanish persons with ND; they are promising useful tools in assessing persons with ND through the continuum of care, as they include important dimensions commonly omitted from other QoL measures.",
"title": ""
},
{
"docid": "fa5c27d91feb3b392e2dba2b2121e184",
"text": "Planned experiments are the gold standard in reliably comparing the causal effect of switching from a baseline policy to a new policy. One critical shortcoming of classical experimental methods, however, is that they typically do not take into account the dynamic nature of response to policy changes. For instance, in an experiment where we seek to understand the effects of a new ad pricing policy on auction revenue, agents may adapt their bidding in response to the experimental pricing changes. Thus, causal effects of the new pricing policy after such adaptation period, the long-term causal effects, are not captured by the classical methodology even though they clearly are more indicative of the value of the new policy. Here, we formalize a framework to define and estimate long-term causal effects of policy changes in multiagent economies. Central to our approach is behavioral game theory, which we leverage to formulate the ignorability assumptions that are necessary for causal inference. Under such assumptions we estimate long-term causal effects through a latent space approach, where a behavioral model of how agents act conditional on their latent behaviors is combined with a temporal model of how behaviors evolve over time.",
"title": ""
},
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
},
{
"docid": "cc15583675d6b19fbd9a10f06876a61e",
"text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"title": ""
},
{
"docid": "38e7a36e4417bff60f9ae0dbb7aaf136",
"text": "Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus, permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture",
"title": ""
}
] |
scidocsrr
|
f34a6cfd26ac2f79ee368bfd79be5899
|
Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques
|
[
{
"docid": "f800ea72820e760a9218d6ad8351996c",
"text": "This paper investigates the subject of intrusi on detection over networks. Existing network-based IDS’s are categorised into three groups and the overall architectu re of each group is summarised and assessed. A new methodology to this problem is then presented, which is inspired by the human immune system and based on a novel artificial immune model. The architecture of the model is presented and its characteristics are compared with the requirements of network-based IDS’s. The paper concludes tha t this new approach shows considerable promise for future network-based IDS’s.",
"title": ""
}
] |
[
{
"docid": "b27914276ab35f7a8ec21035f2762652",
"text": "Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time -- with solely user reviews and ratings as observables over time. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with four realworld datasets, we show that our model improves the rating prediction over state-of-the-art baselines, by a substantial margin. In addition, our model can also give some interpretations for the user experience level.",
"title": ""
},
{
"docid": "cd0786a460701482df190fe04be01bb0",
"text": "This software is designed to solve conic programming problems whose constraint cone is a product of semidefinite cones, second-order cones, nonnegative orthants and Euclidean spaces; and whose objective function is the sum of linear functions and log-barrier terms associated with the constraint cones. This includes the special case of determinant maximization problems with linear matrix inequalities. It employs an infeasible primal-dual predictor-corrector path-following method, with either the HKM or the NT search direction. The basic code is written in Matlab, but key subroutines in C are incorporated via Mex files. Routines are provided to read in problems in either SDPA or SeDuMi format. Sparsity and block diagonal structure are exploited. We also exploit low-rank structures in the constraint matrices associated the semidefinite blocks if such structures are explicitly given. To help the users in using our software, we also include some examples to illustrate the coding of problem data for our SQLP solver. Various techniques to improve the efficiency and stability of the algorithm are incorporated. For example, step-lengths associated with semidefinite cones are calculated via the Lanczos method. Numerical experiments show that this general purpose code can solve more than 80% of a total of about 300 test problems to an accuracy of at least 10−6 in relative duality gap and infeasibilities. Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543 (mattohkc@nus.edu.sg); and Singapore-MIT Alliance, E4-04-10, 4 Engineering Drive 3, Singapore 117576. Research supported in parts by NUS Research Grant R146-000-076-112 and SMA IUP Research Grant. Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA (reha@cmu.edu). Research supported in part by NSF through grants CCR-9875559, CCF-0430868 and by ONR through grant N00014-05-1-0147 School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853, USA (miketodd@cs.cornell.edu). Research supported in part by NSF through grant DMS-0209457 and by ONR through grant N00014-02-1-0057.",
"title": ""
},
{
"docid": "b707002cdfb59ce9f9f15afc3e8026aa",
"text": "Watching another person being touched activates a similar neural circuit to actual touch and, for some people with 'mirror-touch' synesthesia, can produce a felt tactile sensation on their own body. In this study, we provide evidence for the existence of this type of synesthesia and show that it correlates with heightened empathic ability. This is consistent with the notion that we empathize with others through a process of simulation.",
"title": ""
},
{
"docid": "3d3c60b2491f9e720171f55e8ecb0a5c",
"text": "There is an increasing need for fault tolerance capabilities in logic devices brought about by the scaling of transistors to ever smaller geometries. This paper presents a hypervisor-based replication approach that can be applied to commodity hardware to allow for virtually lockstepped execution. It offers many of the benefits of hardware-based lockstep while being cheaper and easier to implement and more flexible in the configurations supported. A novel form of processor state fingerprinting is also presented, which can significantly reduce the fault detection latency. This further improves reliability by triggering rollback recovery before errors are recorded to a checkpoint. The mechanisms are validated using a full prototype and the benchmarks considered indicate an average performance overhead of approximately 14 percent with the possibility for significant optimization. Finally, a unique method of using virtual lockstep for fault injection testing is presented and used to show that significant detection latency reduction is achievable by comparing only a small amount of data across replicas.",
"title": ""
},
{
"docid": "0bbabbcc08ea494330b1675445851f9d",
"text": "One trend in the implementation of modern web systems is the use of activity data in the form of log or event messages that capture user and server activity. This data is at the heart of many internet systems in the domains of advertising, relevance, search, recommendation systems, and security, as well as continuing to fulfill its traditional role in analytics and reporting. Many of these uses place real-time demands on data feeds. Activity data is extremely high volume and real-time pipelines present new design challenges. This paper discusses the design and engineering problems we encountered in moving LinkedIn’s data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. This pipeline currently runs in production at LinkedIn and handles more than 10 billion message writes each day with a sustained peak of over 172,000 messages per second. Kafka supports dozens of subscribing systems and delivers more than 55 billion messages to these consumer processing each day. We discuss the origins of this systems, missteps on the path to real-time, and the design and engineering problems we encountered along the way.",
"title": ""
},
{
"docid": "019138302eadaf18b2148db11720bcc5",
"text": "Face recognition with still face images has been widely studied, while the research on video-based face recognition is inadequate relatively, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Videoto-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively, taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have benchmarked for all the three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX1 Face DB. Specifically, we make three contributions. First, we collect and release a largescale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation.",
"title": ""
},
{
"docid": "53267e7e574dce749bb3d5877640e017",
"text": "After a decline in enthusiasm for national community health worker (CHW) programmes in the 1980s, these have re-emerged globally, particularly in the context of HIV. This paper examines the case of South Africa, where there has been rapid growth of a range of lay workers (home-based carers, lay counsellors, DOT supporters etc.) principally in response to an expansion in budgets and programmes for HIV, most recently the rollout of antiretroviral therapy (ART). In 2004, the term community health worker was introduced as the umbrella concept for all the community/lay workers in the health sector, and a national CHW Policy Framework was adopted. We summarize the key features of the emerging national CHW programme in South Africa, which include amongst others, their integration into a national public works programme and the use of non-governmental organizations as intermediaries. We then report on experiences in one Province, Free State. Over a period of 2 years (2004--06), we made serial visits on three occasions to the first 16 primary health care facilities in this Province providing comprehensive HIV services, including ART. At each of these visits, we did inventories of CHW numbers and training, and on two occasions conducted facility-based group interviews with CHWs (involving a total of 231 and 182 participants, respectively). We also interviewed clinic nurses tasked with supervising CHWs. From this evaluation we concluded that there is a significant CHW presence in the South African health system. This infrastructure, however, shares many of the managerial challenges (stability, recognition, volunteer vs. worker, relationships with professionals) associated with previous national CHW programmes, and we discuss prospects for sustainability in the light of the new policy context.",
"title": ""
},
{
"docid": "152e8e88e8f560737ec0c20ae9aa0335",
"text": "UNLABELLED\nDysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance.\n\n\nKEY PRACTITIONER MESSAGE\nThe addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance.",
"title": ""
},
{
"docid": "5edaf5ec5276e6709b7cd4224139388a",
"text": "Recently, there has been considerable interest in the use of Model Checking for Systems Biology. Unfortunately, the state space of stochastic biological models is often too large for classical Model Checking techniques. For these models, a statistical approach to Model Checking has been shown to be an effective alternative. Extending our earlier work, we present the first algorithm for performing statistical Model Checking using Bayesian Sequential Hypothesis Testing. We show that our Bayesian approach outperforms current statistical Model Checking techniques, which rely on tests from Classical (aka Frequentist) statistics, by requiring fewer system simulations. Another advantage of our approach is the ability to incorporate prior Biological knowledge about the model being verified. We demonstrate our algorithm on a variety of models from the Systems Biology literature and show that it enables faster verification than state-of-the-art techniques, even when no prior knowledge is available.",
"title": ""
},
{
"docid": "59a32ec5b88436eca75d8fa9aa75951b",
"text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?\" We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.",
"title": ""
},
{
"docid": "1df103aef2a4a5685927615cfebbd1ea",
"text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of the a slip and the appearance of the ratio change (74 ±9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.",
"title": ""
},
{
"docid": "46cecc587352fee7248377bbca2c03d2",
"text": "Several tasks in urban and architectural design are today undertaken in a geospatial context. Building Information Models (BIM) and geospatial technologies offer 3D data models that provide information about buildings and the surrounding environment. The Industry Foundation Classes (IFC) and CityGML are today the two most prominent semantic models for representation of BIM and geospatial models respectively. CityGML has emerged as a standard for modeling city models while IFC has been developed as a reference model for building objects and sites. Current CAD and geospatial software provide tools that allow the conversion of information from one format to the other. These tools are however fairly limited in their capabilities, often resulting in data and information losses in the transformations. This paper describes a new approach for data integration based on a unified building model (UBM) which encapsulates both the CityGML and IFC models, thus avoiding translations between the models and loss of information. To build the UBM, all classes and related concepts were initially collected from both models, overlapping concepts were merged, new objects were created to ensure the capturing of both indoor and outdoor objects, and finally, spatial relationships between the objects were redefined. Unified Modeling Language (UML) notations were used for representing its objects and relationships between them. There are two use-case scenarios, both set in a hospital: “evacuation” and “allocating spaces for patient wards” were developed to validate and test the proposed UBM data model. Based on these two scenarios, four validation queries OPEN ACCESS ISPRS Int. J. Geo-Inf. 2012, 1 121 were defined in order to validate the appropriateness of the proposed unified building model. It has been validated, through the case scenarios and four queries, that the UBM being developed is able to integrate CityGML data as well as IFC data in an apparently seamless way. Constraints and enrichment functions are used for populating empty database tables and fields. The motivation scenarios also show the needs and benefits of having an integrated approach to the modeling of indoor and outdoor spatial features.",
"title": ""
},
{
"docid": "165aa4bad30a95866be4aff878fbd2cf",
"text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to joshua.gans@gmail.com.",
"title": ""
},
{
"docid": "4655dcd241aa9e543111c5c95026b365",
"text": "Received: 15 May 2002 Revised: 31 January 2003 Accepted: 18 July 2003 Abstract In this study, we developed a conceptual model for studying the adoption of electronic business (e-business or EB) at the firm level, incorporating six adoption facilitators and inhibitors, based on the technology–organization– environment theoretical framework. Survey data from 3100 businesses and 7500 consumers in eight European countries were used to test the proposed adoption model. We conducted confirmatory factor analysis to assess the reliability and validity of constructs. To examine whether adoption patterns differ across different e-business environments, we divided the full sample into high EB-intensity and low EB-intensity countries. After controlling for variations of industry and country effects, the fitted logit models demonstrated four findings: (1) Technology competence, firm scope and size, consumer readiness, and competitive pressure are significant adoption drivers, while lack of trading partner readiness is a significant adoption inhibitor. (2) As EB-intensity increases, two environmental factors – consumer readiness and lack of trading partner readiness – become less important, while competitive pressure remains significant. (3) In high EB-intensity countries, e-business is no longer a phenomenon dominated by large firms; as more and more firms engage in e-business, network effect works to the advantage of small firms. (4) Firms are more cautious in adopting e-business in high EB-intensity countries – it seems to suggest that the more informed firms are less aggressive in adopting e-business, a somehow surprising result. Explanations and implications are offered. European Journal of Information Systems (2003) 12, 251–268. doi:10.1057/ palgrave.ejis.3000475",
"title": ""
},
{
"docid": "795f59c0658a56aa68a9271d591c81a6",
"text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"title": ""
},
{
"docid": "69902c9571cafdbf126e14f608c081ce",
"text": "Most recent storage devices, such as NAND flash-based solid state drives (SSDs), provide low access latency and high degree of parallelism. However, conventional file systems, which are designed for slow hard disk drives, often encounter severe scalability bottlenecks in exploiting the advances of these fast storage devices on manycore architectures. To scale file systems to many cores, we propose SpanFS, a novel file system which consists of a collection of micro file system services called domains. SpanFS distributes files and directories among the domains, provides a global file system view on top of the domains and maintains consistency in case of system crashes. SpanFS is implemented based on the Ext4 file system. Experimental results evaluating SpanFS against Ext4 on a modern PCI-E SSD show that SpanFS scales much better than Ext4 on a 32-core machine. In microbenchmarks SpanFS outperforms Ext4 by up to 1226%. In application-level benchmarks SpanFS improves the performance by up to 73% relative to Ext4.",
"title": ""
},
{
"docid": "4b8cd508689eb4cfe4423bf1b30bce3e",
"text": "A two-dimensional (2D) periodic leaky-wave antenna consisting of a periodic distribution of rectangular patches on a grounded dielectric substrate, excited by a narrow slot in the ground plane, is studied here. The TM0 surface wave that is normally supported by a grounded dielectric substrate is perturbed by the presence of the periodic patches to produce radially-propagating leaky waves. In addition to making a novel microwave antenna structure, this design is motivated by the phenomena of directive beaming and enhanced transmission observed in plasmonic structures in the optical regime.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "54d242cf31eaa27823217d34ea3b5c0a",
"text": "In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA) task. Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for image QA, with the performances significantly outperforming the state-of-the-art.",
"title": ""
},
{
"docid": "dd723b23b4a7d702f8d34f15b5c90107",
"text": "Smartphones have become a prominent part of our technology driven world. When it comes to uncovering, analyzing and submitting evidence in today's criminal investigations, mobile phones play a more critical role. Thus, there is a strong need for software tools that can help investigators in the digital forensics field effectively analyze smart phone data to solve crimes.\n This paper will accentuate how digital forensic tools assist investigators in getting data acquisition, particularly messages, from applications on iOS smartphones. In addition, we will lay out the framework how to build a tool for verifying data integrity for any digital forensics tool.",
"title": ""
}
] |
scidocsrr
|
d2e6c944476982f9ae41d5a182401867
|
Game theory for cognitive radio networks: An overview
|
[
{
"docid": "3ea35f018869f02209105200f78d03b4",
"text": "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future.",
"title": ""
}
] |
[
{
"docid": "8fcb30825553e58ff66fd85ded10111e",
"text": "Most ecological processes now show responses to anthropogenic climate change. In terrestrial, freshwater, and marine ecosystems, species are changing genetically, physiologically, morphologically, and phenologically and are shifting their distributions, which affects food webs and results in new interactions. Disruptions scale from the gene to the ecosystem and have documented consequences for people, including unpredictable fisheries and crop yields, loss of genetic diversity in wild crop varieties, and increasing impacts of pests and diseases. In addition to the more easily observed changes, such as shifts in flowering phenology, we argue that many hidden dynamics, such as genetic changes, are also taking place. Understanding shifts in ecological processes can guide human adaptation strategies. In addition to reducing greenhouse gases, climate action and policy must therefore focus equally on strategies that safeguard biodiversity and ecosystems.",
"title": ""
},
{
"docid": "c8bc6eb66ecb0dda480a049ecaef8390",
"text": "BACKGROUND\nThe practice of evidence-based medicine (EBM) requires clinicians to integrate their expertise with the latest scientific research. But this is becoming increasingly difficult with the growing numbers of published articles. There is a clear need for better tools to improve clinician's ability to search the primary literature. Randomized clinical trials (RCTs) are the most reliable source of evidence documenting the efficacy of treatment options. This paper describes the retrieval of key sentences from abstracts of RCTs as a step towards helping users find relevant facts about the experimental design of clinical studies.\n\n\nMETHOD\nUsing Conditional Random Fields (CRFs), a popular and successful method for natural language processing problems, sentences referring to Intervention, Participants and Outcome Measures are automatically categorized. This is done by extending a previous approach for labeling sentences in an abstract for general categories associated with scientific argumentation or rhetorical roles: Aim, Method, Results and Conclusion. Methods are tested on several corpora of RCT abstracts. First structured abstracts with headings specifically indicating Intervention, Participant and Outcome Measures are used. Also a manually annotated corpus of structured and unstructured abstracts is prepared for testing a classifier that identifies sentences belonging to each category.\n\n\nRESULTS\nUsing CRFs, sentences can be labeled for the four rhetorical roles with F-scores from 0.93-0.98. This outperforms the use of Support Vector Machines. Furthermore, sentences can be automatically labeled for Intervention, Participant and Outcome Measures, in unstructured and structured abstracts where the section headings do not specifically indicate these three topics. F-scores of up to 0.83 and 0.84 are obtained for Intervention and Outcome Measure sentences.\n\n\nCONCLUSION\nResults indicate that some of the methodological elements of RCTs are identifiable at the sentence level in both structured and unstructured abstract reports. This is promising in that sentences labeled automatically could potentially form concise summaries, assist in information retrieval and finer-grained extraction.",
"title": ""
},
{
"docid": "71570a28c887227b3421b1f91ba61f4c",
"text": "Anomaly based network intrusion detection (ANID) is an important problem that has been researched within diverse research areas and various application domains. Several anomaly based network intrusion detection systems (ANIDS) can be found in the literature. Most ANIDSs employ supervised algorithms, whose performances highly depend on attack-free training data. However, this kind of training data is difficult to obtain in real world network environment. Moreover, with changing network environment or services, patterns of normal traffic will be changed. This leads to high false positive rate of supervised ANIDSs. Using unsupervised anomaly detection techniques, however, the system can be trained with unlabeled data and is capable of detecting previously unseen attacks. We have categorized the existing ANIDSs based on its type, class, nature of detection/ processing, level of security, etc. We also enlist some proximity measures for intrusion data analysis and detection. We also report some experimental results for detection of attacks over the KDD’99 dataset.",
"title": ""
},
{
"docid": "48852e3ad890327757fcb6e5b0bc6e6e",
"text": "The credit scoring model development has become a very important issue, as the credit industry is highly competitive. Therefore, considerable credit scoring models have been widely studied in the areas of statistics to improve the accuracy of credit scoring during the past few years. This study constructs a hybrid SVM-based credit scoring models to evaluate the applicant’s credit score according to the applicant’s input features: (1) using neighborhood rough set to select input features; (2) using grid search to optimize RBF kernel parameters; (3) using the hybrid optimal input features and model parameters to solve the credit scoring problem with 10-fold cross validation; (4) comparing the accuracy of the proposed method with other methods. Experiment results demonstrate that the neighborhood rough set and SVM based hybrid classifier has the best credit scoring capability compared with other hybrid classifiers. It also outperforms linear discriminant analysis, logistic regression and neural networks. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9986073424bf18814ef0e5affd15d8e3",
"text": "This paper presents an energy-efficient feature extraction accelerator design aimed at visual navigation. The hardware-oriented algorithmic modifications such as a circular-shaped sampling region and unified description are proposed to minimize area and energy consumption while maintaining feature extraction quality. A matched-throughput accelerator employs fully-unrolled filters and single-stream descriptor enabled by algorithm-architecture co-optimization, which requires lower clock frequency for the given throughput requirement and reduces hardware cost of description processing elements. Due to the large number of FIFO blocks, a robust low-power FIFO architecture for the ultra-low voltage (ULV) regime is also proposed. This approach leverages shift-latch delay elements and balanced-leakage readout technique to achieve 62% energy savings and 37% delay reduction. We apply these techniques to a feature extraction accelerator that can process 30 fps VGA video in real time and is fabricated in 28 nm LP CMOS technology. The design consumes 2.7 mW with a clock frequency of 27 MHz at Vdd = 470 mV, providing 3.5× better energy efficiency than previous state-of-the-art while extracting features from entire image.",
"title": ""
},
{
"docid": "f7a9c40a3b91b95695395d6af6647cea",
"text": "U.S. Air Force special tactics operators at times use small wearable computers (SWCs) for mission objectives. The primary pointing device of a SWC is either a touchpad or trackpoint, which is embedded into the chassis of the SWC. In situations where the user cannot directly interact with these pointing devices, the utility of the SWC is decreased. We developed a pointing device called the G3 that can be used for SWCs used by operators. The device utilizes gyroscopic sensors attached to the user’s index finger to move the computer cursor according to the angular velocity of his finger. We showed that, as measured by Fitts’ law, the overall performance and accuracy of the G3 was better than that of the touchpad and trackpoint. These findings suggest that the G3 can adequately be used with SWCs. Additionally, we investigated the G3 ’s utility as a control device for operating micro remotely piloted aircrafts",
"title": ""
},
{
"docid": "141e927711efe3ee66b0512322bfee9c",
"text": "Reputation systems have become an indispensable component of modern E-commerce systems, as they help buyers make informed decisions in choosing trustworthy sellers. To attract buyers and increase the transaction volume, sellers need to earn reasonably high reputation scores. This process usually takes a substantial amount of time. To accelerate this process, sellers can provide price discounts to attract users, but the underlying difficulty is that sellers have no prior knowledge on buyers’ preferences over price discounts. In this article, we develop an online algorithm to infer the optimal discount rate from data. We first formulate an optimization framework to select the optimal discount rate given buyers’ discount preferences, which is a tradeoff between the short-term profit and the ramp-up time (for reputation). We then derive the closed-form optimal discount rate, which gives us key insights in applying a stochastic bandits framework to infer the optimal discount rate from the transaction data with regret upper bounds. We show that the computational complexity of evaluating the performance metrics is infeasibly high, and therefore, we develop efficient randomized algorithms with guaranteed performance to approximate them. Finally, we conduct experiments on a dataset crawled from eBay. Experimental results show that our framework can trade 60% of the short-term profit for reducing the ramp-up time by 40%. This reduction in the ramp-up time can increase the long-term profit of a seller by at least 20%.",
"title": ""
},
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "a93245f0e29ce5907d9ee2152f3d8ce8",
"text": "Supervisory signals can help topic models discover low-dimensional data representations that are more interpretable for clinical tasks. We propose a framework for training supervised latent Dirichlet allocation that balances two goals: faithful generative explanations of high-dimensional data and accurate prediction of associated class labels. Existing approaches fail to balance these goals by not properly handling a fundamental asymmetry: the intended task is always predicting labels from data, not data from labels. Our new prediction-constrained objective trains models that predict labels from heldout data well while also producing good generative likelihoods and interpretable topic-word parameters. In a case study on predicting depression medications from electronic health records, we demonstrate improved recommendations compared to previous supervised topic models and high-dimensional logistic regression from words alone.",
"title": ""
},
{
"docid": "b2895d35c6ffddfb9adc7c1d88cef793",
"text": "We develop algorithms for a stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. Scheduling surgeries in an operating room motivates the work. The problem is formulated as an integer stochastic program using sample average approximation. A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing based on real data shows that the proposed methods produce good results compared to previous approaches. In addition we prove that the finite scenario sample average approximation problem is NP-complete.",
"title": ""
},
{
"docid": "2d3e779b25d0ffe8a97744be370125fa",
"text": "This paper describes the details of Sighthound’s fully automated age, gender and emotion recognition system. The backbone of our system consists of several deep convolutional neural networks that are not only computationally inexpensive, but also provide state-of-theart results on several competitive benchmarks. To power our novel deep networks, we collected large labeled datasets through a semi-supervised pipeline to reduce the annotation effort/time. We tested our system on several public benchmarks and report outstanding results. Our age, gender and emotion recognition models are available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud",
"title": ""
},
{
"docid": "dceef3bbc02b4c83918d87d56cad863e",
"text": "In this paper we present an automated way of using spare CPU resources within a shared memory multi-processor or multi-core machine. Our approach is (i) to profile the execution of a program, (ii) from this to identify pieces of work which are promising sources of parallelism, (iii) recompile the program with this work being performed speculatively via a work-stealing system and then (iv) to detect at run-time any attempt to perform operations that would reveal the presence of speculation.\n We assess the practicality of the approach through an implementation based on GHC 6.6 along with a limit study based on the execution profiles we gathered. We support the full Concurrent Haskell language compiled with traditional optimizations and including I/O operations and synchronization as well as pure computation. We use 20 of the larger programs from the 'nofib' benchmark suite. The limit study shows that programs vary a lot in the parallelism we can identify: some have none, 16 have a potential 2x speed-up, 4 have 32x. In practice, on a 4-core processor, we get 10-80% speed-ups on 7 programs. This is mainly achieved at the addition of a second core rather than beyond this.\n This approach is therefore not a replacement for manual parallelization, but rather a way of squeezing extra performance out of the threads of an already-parallel program or out of a program that has not yet been parallelized.",
"title": ""
},
{
"docid": "a3fe3b92fe53109888b26bb03c200180",
"text": "Using Artificial Neural Networh (A\".) in critical applications can be challenging due to the often experimental nature of A\" construction and the \"black box\" label that is fiequently attached to A\".. Wellaccepted process models exist for algorithmic sofhyare development which facilitate software validation and acceptance. The sojiware development process model presented herein is targeted specifically toward artificial neural networks in crik-al appliicationr. 7% model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use AMVs and need to maintain or achieve a Capability Maturity Model (CM&?I or IS0 sofhyare development rating. Further, while this model is aimed directly at neural network development, with minor moda&ations, the model could be applied to any technique wherein knowledge is extractedfiom existing &ka, such as other numeric approaches or knowledge-based systems.",
"title": ""
},
{
"docid": "fa52038147254f207a31760cf0109ccd",
"text": "Small objects detection is a challenging task in computer vision due to its limited resolution and information. In order to solve this problem, the majority of existing methods sacrifice speed for improvement in accuracy. In this paper, we aim to detect small objects at a fast speed, using the best object detector Single Shot Multibox Detector (SSD) with respect to accuracy-vs-speed trade-off as base architecture. We propose a multi-level feature fusion method for introducing contextual information in SSD, in order to improve the accuracy for small objects. In detailed fusion operation, we design two feature fusion modules, concatenation module and element-sum module, different in the way of adding contextual information. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than baseline SSD by 1.6 and 1.7 points respectively, especially with 2-3 points improvement on some small objects categories. The testing speed of them is 43 and 40 FPS respectively, superior to the state of the art Deconvolutional single shot detector (DSSD) by 29.4 and 26.4 FPS.",
"title": ""
},
{
"docid": "7ccd75f1626966b4ffb22f2788d64fdc",
"text": "Diabetes has affected over 246 million people worldwide with a majority of them being women. According to the WHO report, by 2025 this number is expected to rise to over 380 million. The disease has been named the fifth deadliest disease in the United States with no imminent cure in sight. With the rise of information technology and its continued advent into the medical and healthcare sector, the cases of diabetes as well as their symptoms are well documented. This paper aims at finding solutions to diagnose the disease by analyzing the patterns found in the data through classification analysis by employing Decision Tree and Naïve Bayes algorithms. The research hopes to propose a quicker and more efficient technique of diagnosing the disease, leading to timely treatment of the patients.",
"title": ""
},
{
"docid": "902e6d047605a426ae9bebc3f9ddf139",
"text": "Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "3ba011d181a4644c8667b139c63f50ff",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "7a8d82930ecfe4cdaabc7a5f5406763b",
"text": "This work presents a vision-based vehicle identification system which consists of object extraction, object tracking, occlusion detection and segmentation, and vehicle classification. Since the vehicles on the freeway may occlude each other, their trajectories may merge or split. To separate the occluded objects, we develop three processed: occlusion detection, motion vector calibration, and motion field clustering. Finally, the segmented objects are classified into seven different categorized vehicles.",
"title": ""
},
{
"docid": "1cc3f4ab70fadf1a497c694c0cfa6c69",
"text": "In modern C++ development, templates are essential to achieve maintainable and reusable code. On compilation of the source code, the compilers have to perform semantic checks of the template definition and instantiation, to satisfy the rules of the C++ standard and to make sure, the code related to the templates is correct. This task can be accomplished in different ways. In this paper, the process of the semantic verification of C++ tenmplates in CLANG is analyzed. The mechanism of checking the template definition and instantiation is explored separately by examining selected functions from the CLANG source code. Beside this, statements about the code quality and implementation are given. In addition, an analysis of the capability of CLANG to support Concepts, a type-system for template parameters, originally proposed for the C++0x standard is included in this paper. Based on the findings of the template definition and instantiation semantics, necessary changes to the code of CLANG are described as well as possible problems that can appear.",
"title": ""
}
] |
scidocsrr
|
c1e7e762978fd4a25f7abb6bfd28e104
|
Grasper having tactile sensing function using acoustic reflection for laparoscopic surgery
|
[
{
"docid": "b5cb1a416b3960b0cea7d5edc719dde8",
"text": "Research on surgical robotics demands systems for evaluating scientific approaches. Such systems can be divided into dedicated and versatile systems. Dedicated systems are designed for a single surgical task or technique, whereas versatile systems are designed to be expandable and useful in multiple surgical applications. Versatile systems are often based on industrial robots, though, and because of this, are hardly suitable for close contact with humans. To achieve a high degree of versatility the Miro robotic surgery platform (MRSP) consists of versatile components, dedicated front–ends towards surgery and configurable interfaces for the surgeon. This paper presents MiroSurge, a configuration of the MRSP that allows for bimanual endoscopic telesurgery with force feedback. While the components of the MiroSurge system are shown to fulfil the rigid design requirements for robotic telesurgery with force feedback, the system remains versatile, which is supposed to be a key issue for the further development and optimisation.",
"title": ""
}
] |
[
{
"docid": "66a04c37464888c83bdb7071aad36ad1",
"text": "As usual let s = σ+ it. For any fixed value t = t0 with |t0| ≥ 8, and for σ ≤ 0, we show that |ζ(s)| is strictly monotone decreasing in σ, with the same result also holding for the related functions ξ of Riemann and η of Euler. The following inequality relating the monotonicity of all three functions is proved:",
"title": ""
},
{
"docid": "170a1dba20901d88d7dc3988647e8a22",
"text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.",
"title": ""
},
{
"docid": "5bd483e895de779f8b91ca8537950a2f",
"text": "To evaluate the efficacy of pregabalin in facilitating taper off chronic benzodiazepines, outpatients (N = 106) with a lifetime diagnosis of generalized anxiety disorder (current diagnosis could be subthreshold) who had been treated with a benzodiazepine for 8-52 weeks were stabilized for 2-4 weeks on alprazolam in the range of 1-4 mg/day. Patients were then randomized to 12 weeks of double-blind treatment with either pregabalin 300-600 mg/day or placebo while undergoing a gradual benzodiazepine taper at a rate of 25% per week, followed by a 6-week benzodiazepine-free phase during which they continued double-blind study treatment. Outcome measures included ability to remain benzodiazepine-free (primary) as well as changes in Hamilton Anxiety Rating Scale (HAM)-A and Physician Withdrawal Checklist (PWC). At endpoint, a non-significant higher proportion of patients remained benzodiazepine-free receiving pregabalin compared with placebo (51.4% vs 37.0%). Treatment with pregabalin was associated with significantly greater endpoint reduction in the HAM-A total score versus placebo (-2.5 vs +1.3; p < 0.001), and lower endpoint mean PWC scores (6.5 vs 10.3; p = 0.012). Thirty patients (53%) in the pregabalin group and 19 patients (37%) in the placebo group completed the study, reducing the power to detect a significant difference on the primary outcome. The results on the anxiety and withdrawal severity measures suggest that switching to pregabalin may be a safe and effective method for discontinuing long-term benzodiazepine therapy.",
"title": ""
},
{
"docid": "a083a09e0b156781d1a782e2b6951c9d",
"text": "If a person with carious lesions needs or requests crowns or inlays, these dental fillings have to be manufactured for each tooth and each person individually. We survey computer vision techniques which can be used to automate this process. We introduce three particular applications which are concerned with the reconstruction of surface information. The first one aims at building up a database of normalized depth images of posterior teeth and at extracting characteristic features from these images. In the second application, a given occlusal surface of a posterior tooth with a prepared cavity is digitally reconstructed using an intact model tooth from a given database. The calculated surface data can then be used for automatic milling of a dental prosthesis, e.g. from a preshaped ceramic block. In the third application a hand-made provisoric wax inlay or crown can be digitally scanned by a laser sensor and copied three dimensionally into a different material such as ceramic. The results are converted to a format required by the computer-integrated manufacturing (CIM) system for automatic milling.",
"title": ""
},
{
"docid": "d90efd08169f350d336afcbea291306c",
"text": "This paper describes a multi-UAV distributed decisional architecture developed in the framework of the AWARE Project together with a set of tests with real Unmanned Aerial Vehicles (UAVs) and Wireless Sensor Networks (WSNs) to validate this approach in disaster management and civil security applications. The paper presents the different components of the AWARE platform and the scenario in which the multi-UAV missions were carried out. The missions described in this paper include surveillance with multiple UAVs, sensor deployment and fire threat confirmation. In order to avoid redundancies, instead of describing the operation of the full architecture for every mission, only non-overlapping aspects are highlighted in each one. Key issues in multi-UAV systems such as distributed task allocation, conflict resolution and plan refining are solved in the execution of the missions.",
"title": ""
},
{
"docid": "907de88b781d58610b0a09313014017f",
"text": "This study was conducted to determine the seroprevalence of antibodies against Newcastle disease virus (NDV), Chicken infectious anemia virus (CIAV) and Avian influenza virus (AIV) in indigenous chickens in Grenada, West Indies. Indigenous chickens are kept for eggs and meat for either domestic consumption or local sale. These birds are usually kept in the backyard of the house with little or no shelter. The mean size of the flock per household was 14 birds (range 5-40 birds). Blood was collected from 368 birds from all the six parishes of Grenada and serum samples were tested for antibodies against NDV, CIAV and AIV using commercial enzyme-linked immunosorbent assay (ELISA) kits. The seroprevalence of antibodies against NDV, CIA and AI was 66.3% (95% CI; 61.5% to 71.1%), 59.5% (95% CI; 54.4% to 64.5%) and 10.3% (95% CI; 7.2% to 13.4%), respectively. Since indigenous chickens in Grenada are not vaccinated against poultry pathogens, these results indicate exposure of chickens to NDV, AIV and CIAV Indigenous chickens are thus among the risk factors acting as vectors of pathogens that can threaten commercial poultry and other avian species in Grenada",
"title": ""
},
{
"docid": "a4c739a3b4d6adbb907568c7fdc85d9d",
"text": "This paper describes about implementation of speech recognition system on a mobile robot for controlling movement of the robot. The methods used for speech recognition system are Linear Predictive Coding (LPC) and Artificial Neural Network (ANN). LPC method is used for extracting feature of a voice signal and ANN is used as the recognition method. Backpropagation method is used to train the ANN. Voice signals are sampled directly from the microphone and then they are processed using LPC method for extracting the features of voice signal. For each voice signal, LPC method produces 576 data. Then, these data become the input of the ANN. The ANN was trained by using 210 data training. This data training includes the pronunciation of the seven words used as the command, which are created from 30 different people. Experimental results show that the highest recognition rate that can be achieved by this system is 91.4%. This result is obtained by using 25 samples per word, 1 hidden layer, 5 neurons for each hidden layer, and learning rate 0.1.",
"title": ""
},
{
"docid": "b09ebc39f36f16a0ef7cb1b5e3ce9620",
"text": "Mobile applications (apps) can be very useful software on smartphones for all aspects of people’s lives. Chronic diseases, such as diabetes, can be made manageable with the support of mobile apps. Applications on smartphones can also help people with diabetes to control their fitness and health. A systematic review of free apps in the English language for smartphones in three of the most popular mobile app stores: Google Play (Android), App Store (iOS) and Windows Phone Store, was performed from November to December 2015. The review of freely available mobile apps for self-management of diabetes was conducted based on the criteria for promoting diabetes self-management as defined by Goyal and Cafazzo (monitoring blood glucose level and medication, nutrition, physical exercise and body weight). The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) was followed. Three independent experts in the field of healthcare-related mobile apps were included in the assessment for eligibility and testing phase. We tested and evaluated 65 apps (21 from Google Play Store, 31 from App Store and 13 from Windows Phone Store). Fifty-six of these apps did not meet even minimal requirements or did not work properly. While a wide selection of mobile applications is available for self-management of diabetes, current results show that there are only nine (5 from Google Play Store, 3 from App Store and 1 from Windows Phone Store) out of 65 reviewed mobile apps that can be versatile and useful for successful self-management of diabetes based on selection criteria. The levels of inclusion of features based on selection criteria in selected mobile apps can be very different. The results of the study can be used as a basis to prvide app developers with certain recommendations. There is a need for mobile apps for self-management of diabetes with more features in order to increase the number of long-term users and thus influence better self-management of the disease.",
"title": ""
},
{
"docid": "2f8a74054d456d1136f0a36303b722bc",
"text": "The swarm intelligence paradigm has proven to have very interesting properties such as robustness, flexibility and ability to solve complex problems exploiting parallelism and self-organization. Several robotics implementations of this paradigm confirm that these properties can be exploited for the control of a population of physically independent mobile robots. The work presented here introduces a new robotic concept called swarm-bot in which the collective interaction exploited by the swarm intelligence mechanism goes beyond the control layer and is extended to the physical level. This implies the addition of new mechanical functionalities on the single robot, together with new electronics and software to manage it. These new functionalities, even if not directly related to mobility and navigation, allow to address complex mobile robotics problems, such as extreme all-terrain exploration. The work shows also how this new concept is investigated using a simulation tool (swarmbot3d) specifically developed for quickly designing and evaluating new control algorithms. Experimental work shows how the simulated detailed representation of one s-bot has been calibrated to match the behaviour of the real robot.",
"title": ""
},
{
"docid": "9b06026e998df745d820fbd835554b13",
"text": "There have been significant advances in the field of Internet of Things (IoT) recently. At the same time there exists an ever-growing demand for ubiquitous healthcare systems to improve human health and well-being. In most of IoT-based patient monitoring systems, especially at smart homes or hospitals, there exists a bridging point (i.e., gateway) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks. These gateways have beneficial knowledge and constructive control over both the sensor network and the data to be transmitted through the Internet. In this paper, we exploit the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, embedded data mining, etc., proposing thus a Smart e-Health Gateway. By taking responsibility for handling some burdens of the sensor network and a remote healthcare center, a Smart e-Health Gateway can cope with many challenges in ubiquitous healthcare systems such as energy efficiency, scalability, and reliability issues. A successful implementation of Smart e-Health Gateways enables massive deployment of ubiquitous health monitoring systems especially in clinical environments. We also present a case study of a Smart e-Health Gateway called UTGATE where some of the discussed higher-level features have been implemented. Our proof-of-concept design demonstrates an IoT-based health monitoring system with enhanced overall system energy efficiency, performance, interoperability, security, and reliability.",
"title": ""
},
{
"docid": "f5a188c87dd38a0a68612352891bcc3f",
"text": "Sentiment analysis of online documents such as news articles, blogs and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with the distribution on a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiment on the real-world data sets has validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles, identifying social emotions on certain entities and news events.",
"title": ""
},
{
"docid": "1451c145b1ed5586755a2c89517a582f",
"text": "A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high quality databases with sufficient training samples which are currently not available. We reviewed the previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on facial area). We elicited participants' facial expressions in a well-controlled laboratory environment and proper illumination (such as removing light flickering). Among nearly 3000 facial movements, 247 micro-expressions were selected for the database with action units (AUs) and emotions labeled. For baseline evaluation, LBP-TOP and SVM were employed respectively for feature extraction and classifier with the leave-one-subject-out cross-validation method. The best performance is 63.41% for 5-class classification.",
"title": ""
},
{
"docid": "a73275f83b94ee3fb1675a125edbb55a",
"text": "Treatment of biowaste, the predominant waste fraction in lowand middle-income settings, offers public health, environmental and economic benefits by converting waste into a hygienic product, diverting it from disposal sites, and providing a source of income. This article presents a comprehensive overview of 13 biowaste treatment technologies, grouped into four categories: (1) direct use (direct land application, direct animal feed, direct combustion), (2) biological treatment (composting, vermicomposting, black soldier fly treatment, anaerobic digestion, fermentation), (3) physico-chemical treatment (transesterification, densification), and (4) thermo-chemical treatment (pyrolysis, liquefaction, gasification). Based on a literature review and expert consultation, the main feedstock requirements, process conditions and treatment products are summarized, and the challenges and trends, particularly regarding the applicability of each technology in the urban lowand middle-income context, are critically discussed. An analysis of the scientific articles published from 2005 to 2015 reveals substantial differences in the amount and type of research published for each technology, a fact that can partly be explained with the development stage of the technologies. Overall, publications from case studies and field research seem disproportionately underrepresented for all technologies. One may argue that this reflects the main task of researchers—to conduct fundamental research for enhanced process understanding—but it may also be a result of the traditional embedding of the waste sector in the discipline of engineering science, where socio-economic and management aspects are seldom object of the research. More unbiased, wellstructured and reproducible evidence from case studies at scale could foster the knowledge transfer to practitioners and enhance the exchange between academia, policy and practice.",
"title": ""
},
{
"docid": "679f15129877227621332bce7ea40218",
"text": "The Semantic Web Rule Language (SWRL) allows the combination of rules and ontology terms, defined using the Web Ontology Language (OWL), to increase the expressiveness of both. However, as rule sets grow, they become difficult to understand and error prone, especially when used and maintained by more than one person. If SWRL is to become a true web standard, it has to be able to handle big rule sets. To find answers to this problem, we first surveyed business rule systems and found the key features and interfaces they used and then, based on our finds, we proposed techniques and tools that use new visual representations to edit rules in a web application. They allow error detection, rule similarity analysis, rule clustering visualization and atom reuse between rules. These tools are implemented in the SWRL Editor, an open source plug-in for Web-Protégé (a web-based ontology editor) that leverages Web-Protégé’s collaborative tools to allow groups of users to not only view and edit rules but also comment and discuss about them. We evaluated our solution comparing it to the only two SWRL editor implementations openly available and showed that it implements more of the key features present in traditional rule systems.",
"title": ""
},
{
"docid": "a62dc7e25b050addad1c27d92deee8b7",
"text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.",
"title": ""
},
{
"docid": "b0def34ea13c4b561a54bd71c8c9ec96",
"text": "This paper describes an algorithm about online gait trajectory generation method, controller for walking, brief introduction of humanoid robot platform KHR-3 (KAIST Humanoid Robot-3: HUBO) and experimental result. The gait trajectory has continuity, smoothness in varying walking period and stride, and it has simple mathematical form which can be implemented easily. It is tested on the robot with some control algorithms. The gait trajectory algorithm is composed of two kinds of function trajectory. The first one is cycloid function, which is used for ankle position in Cartesian coordinate space. Because this profile is made by superposition of linear and sinusoidal function, it has a property of slow start, fast moving, and slow stop. This characteristics can reduce the over burden at instantaneous high speed motion of the actuator. The second one is 3rd order polynomial function. It is continuous in the defined time interval, easy to use when the boundary condition is well defined, and has standard values of coefficients when the time scale is normalized. Position and velocity values are used for its boundary condition. Controllers mainly use F/T(Force/Torque) sensor at the ankle of the robot as a sensor data, and modify the input position profiles (in joint angle space and Cartesian coordinate space). They are to reduce unexpected external forces such as landing shock, and vibration induced by compliances of the sensors and reduction gears, because they can affect seriously on the walking stability. This trajectory and control algorithm is now on the implementing stage for the free-walking realization of KHR-3. As a first stage of realization, we realized the marking time and forward walking algorithm with variable frequency and stride",
"title": ""
},
{
"docid": "ba920ed04c20125f5975519367bebd02",
"text": "Tensor and matrix factorization methods have attracted a lot of attention recently thanks to their successful applications to information extraction, knowledge base population, lexical semantics and dependency parsing. In the first part, we will first cover the basics of matrix and tensor factorization theory and optimization, and then proceed to more advanced topics involving convex surrogates and alternative losses. In the second part we will discuss recent NLP applications of these methods and show the connections with other popular methods such as transductive learning, topic models and neural networks. The aim of this tutorial is to present in detail applied factorization methods, as well as to introduce more recently proposed methods that are likely to be useful to NLP applications.",
"title": ""
},
{
"docid": "661b7615e660ae8e0a3b2a7294b9b921",
"text": "In this paper, a very simple solution-based method is employed to coat amorphous MnO2 onto crystalline SnO2 nanowires grown on stainless steel substrate, which utilizes the better electronic conductivity of SnO2 nanowires as the supporting backbone to deposit MnO2 for supercapacitor electrodes. Cyclic voltammetry (CV) and galvanostatic charge/discharge methods have been carried out to study the capacitive properties of the SnO2/MnO2 composites. A specific capacitance (based on MnO2) as high as 637 F g(-1) is obtained at a scan rate of 2 mV s(-1) (800 F g(-1) at a current density of 1 A g(-1)) in 1 M Na2SO4 aqueous solution. The energy density and power density measured at 50 A g(-1) are 35.4 W h kg(-1) and 25 kW kg(-1), respectively, demonstrating the good rate capability. In addition, the SnO2/MnO2 composite electrode shows excellent long-term cyclic stability (less than 1.2% decrease of the specific capacitance is observed after 2000 CV cycles). The temperature-dependent capacitive behavior is also discussed. Such high-performance capacitive behavior indicates that the SnO2/MnO2 composite is a very promising electrode material for fabricating supercapacitors.",
"title": ""
},
{
"docid": "a00d2d9dde3f767ce6b7308a9cdd8f03",
"text": "Using an improved method of gel electrophoresis, many hitherto unknown proteins have been found in bacteriophage T4 and some of these have been identified with specific gene products. Four major components of the head are cleaved during the process of assembly, apparently after the precursor proteins have assembled into some large intermediate structure.",
"title": ""
},
{
"docid": "79a52df3cc3341fba18b665372ec7453",
"text": "This paper describes experiments, on two domains, to investigate the effect of averaging over predictions of multiple decision trees, instead of using a single tree. Other authors have pointed out theoretical and commonsense reasons for preferring· the multiple tree approach. Ideally, we would like to consider predictions from all trees, weighted by their probability. However, there is a vast·number of different trees, and it is difficult to estimate the probability of each tree. We sidestep the estimation problem by using a modified version of the ID3 algorithm to build good trees, and average over only these trees. Our results are encouraging. For each domain, we managed to produce a small number of good trees. We fmd that it is best to average across sets of trees with different structure; this usually gives better perfonnance than any of the constituent trees, including the ID3 tree.",
"title": ""
}
] |
scidocsrr
|
02c3f96b5188819d74897e59837fc742
|
Abstraction for Solving Large Incomplete-Information Games
|
[
{
"docid": "426a7e1f395213d627cd9fb3b3b561b1",
"text": "In the field of computational game theory, games are often compared in terms of their size. This can be measured in several ways, including the number of unique game states, the number of decision points, and the total number of legal actions over all decision points. These numbers are either known or estimated for a wide range of classic games such as chess and checkers. In the stochastic and imperfect information game of poker, these sizes are easily computed in “limit” games which restrict the players’ available actions, but until now had only been estimated for the more complicated “no-limit” variants. In this paper, we describe a simple algorithm for quickly computing the size of two-player no-limit poker games, provide an implementation of this algorithm, and present for the first time precise counts of the number of game states, information sets, actions and terminal nodes in the no-limit poker games played in the Annual Computer Poker Competition.",
"title": ""
}
] |
[
{
"docid": "2bc481a072f59d244eee80bdcc6eafb4",
"text": "This paper presents a soft switching DC/DC converter for high voltage application. The interleaved pulse-width modulation (PWM) scheme is used to reduce the ripple current at the output capacitor and the size of output inductors. Two converter cells are connected in series at the high voltage side to reduce the voltage stresses of the active switches. Thus, the voltage stress of each switch is clamped at one half of the input voltage. On the other hand, the output sides of two converter cells are connected in parallel to achieve the load current sharing and reduce the current stress of output inductors. In each converter cell, a half-bridge converter with the asymmetrical PWM scheme is adopted to control power switches and to regulate the output voltage at a desired voltage level. Based on the resonant behavior by the output capacitance of power switches and the transformer leakage inductance, active switches can be turned on at zero voltage switching (ZVS) during the transition interval. Thus, the switching losses of power MOSFETs are reduced. The current doubler rectifier is used at the secondary side to partially cancel ripple current. Therefore, the root-mean-square (rms) current at output capacitor is reduced. The proposed converter can be applied for high input voltage applications such as a three-phase 380V utility system. Finally, experiments based on a laboratory prototype with 960W (24V/40A) rated power are provided to demonstrate the performance of proposed converter.",
"title": ""
},
{
"docid": "842d06943ac9ad55ef90d2a4a3c65ed4",
"text": "The abundance of memory corruption and disclosure vulnerabilities in kernel code necessitates the deployment of hardening techniques to prevent privilege escalation attacks. As more strict memory isolation mechanisms between the kernel and user space, like Intel's SMEP, become commonplace, attackers increasingly rely on code reuse techniques to exploit kernel vulnerabilities. Contrary to similar attacks in more restrictive settings, such as web browsers, in kernel exploitation, non-privileged local adversaries have great flexibility in abusing memory disclosure vulnerabilities to dynamically discover, or infer, the location of certain code snippets and construct code-reuse payloads. Recent studies have shown that the coupling of code diversification with the enforcement of a \"read XOR execute\" (R^X) memory safety policy is an effective defense against the exploitation of userland software, but so far this approach has not been applied for the protection of the kernel itself.\n In this paper, we fill this gap by presenting kR^X: a kernel hardening scheme based on execute-only memory and code diversification. We study a previously unexplored point in the design space, where a hypervisor or a super-privileged component is not required. Implemented mostly as a set of GCC plugins, kR^X is readily applicable to the x86-64 Linux kernel and can benefit from hardware support (e.g., MPX on modern Intel CPUs) to optimize performance. In full protection mode, kR^X incurs a low runtime overhead of 4.04%, which drops to 2.32% when MPX is available.",
"title": ""
},
{
"docid": "56dda298f1033dc3bd381d525678b904",
"text": "This study was undertaken to characterize functions of the outer membrane protein OmpW, which potentially contributes to the development of colistin- and imipenem-resistance in Acinetobacter baumannii. Reconstitution of OmpW in artificial lipid bilayers showed that it forms small channels (23 pS in 1 m KCl) and markedly interacts with iron and colistin, but not with imipenem. In vivo, (55) Fe uptake assays comparing the behaviours of ΔompW mutant and wild-type strains confirmed a role for OmpW in A. baumannii iron homeostasis. However, the loss of OmpW expression did not have an impact on A. baumannii susceptibilities to colistin or imipenem.",
"title": ""
},
{
"docid": "d1ab899118a6700d43e7d86ebf5bd19b",
"text": "Taking full advantage of the high resistivity substrate and underlying oxide of SOI technology, a high performance CMOS SPDT T/R switch has been designed and fabricated in a partially depleted, 0.25µm SOI process. The targeted Bluetooth class II specifications have been fully fitted. The switch over the high resistivity substrate exhibits a 0.7dB insertion loss and a 50dB isolation at 2.4GHz; at 5GHz insertion loss and isolation are 1dB and 47dB respectively. The measured ICP1dBis +12dBm.",
"title": ""
},
{
"docid": "b814aa8f08884ac3c483236ee7533ec4",
"text": "Biometric systems based on face recognition have been shown unreliable under the presence of face-spoofing images. Hence, automatic solutions for spoofing detection became necessary. In this paper, face-spoofing detection is proposed by searching for Moiré patterns due to the overlap of the digital grids. The conditions under which these patterns arise are first described, and their detection is proposed which is based on peak detection in the frequency domain. Experimental results for the algorithm are presented for an image database of facial shots under several conditions.",
"title": ""
},
{
"docid": "9446421ed0c69e8e0eadc39674283625",
"text": "The paper presents the main results of a previously developed methodology to better evaluate new technologies in Smart Cities, using a tool to evaluate different systems and technologies regarding their usefulness, considering each application and how technologies can impact the physical space and natural environment. Technologies have also been evaluated according to how they are used by citizens, who must be the main concern of all urban development. Through a survey conducted among the Smart City Spanish network (RECI) we found that the ICT’s that change our cities everyday must be reviewed, developing an innovative methodology in order to find an analysis matrix to assess and score all the technologies that affect a Smart City strategy. The paper provides the results of this methodology regarding the three main aspects to be considered in urban developments: mobility, energy efficiency, and quality of life after obtaining the final score for every analyzed technology. This methodology fulfills an identified need to study how new technologies could affect urban scenarios before being applied, developing an analysis system to be used by urban planners and policy-makers to decide how best to use them, and this paper tries to show, in a simple way, how they can appreciate the variances between different solutions.",
"title": ""
},
{
"docid": "dc1c602709691d96edea1e64c4afa114",
"text": "The authors propose an integration of person-centered therapy, with its focus on the here and now of client awareness of self, and solution-focused therapy, with its future-oriented techniques that also raise awareness of client potentials. Although the two theories hold different assumptions regarding the therapist's role in facilitating client change, it is suggested that solution-focused techniques are often compatible for use within a person-centered approach. Further, solution-focused activities may facilitate the journey of becoming self-aware within the person-centered tradition. This article reviews the two theories, clarifying the similarities and differences. To illustrate the potential integration of the approaches, several types of solution-focused strategies are offered through a clinical example. (PsycINFO Database Record (c) 2011 APA, all rights reserved).",
"title": ""
},
{
"docid": "9f32b1e95e163c96ebccb2596a2edb8d",
"text": "This paper is devoted to the control of a cable driven redundant parallel manipulator, which is a challenging problem due the optimal resolution of its inherent redundancy. Additionally to complicated forward kinematics, having a wide workspace makes it difficult to directly measure the pose of the end-effector. The goal of the controller is trajectory tracking in a large and singular free workspace, and to guarantee that the cables are always under tension. A control topology is proposed in this paper which is capable to fulfill the stringent positioning requirements for these type of manipulators. Closed-loop performance of various control topologies are compared by simulation of the closed-loop dynamics of the KNTU CDRPM, while the equations of parallel manipulator dynamics are implicit in structure and only special integration routines can be used for their integration. It is shown that the proposed joint space controller is capable to satisfy the required tracking performance, despite the inherent limitation of task space pose measurement.",
"title": ""
},
{
"docid": "eff17ece2368b925f0db8e18ea0fc897",
"text": "Blockchain, as the backbone technology of the current popular Bitcoin digital currency, has become a promising decentralized data management framework. Although blockchain has been widely adopted in many applications (e.g., finance, healthcare, and logistics), its application in mobile services is still limited. This is due to the fact that blockchain users need to solve preset proof-of-work puzzles to add new data (i.e., a block) to the blockchain. Solving the proof of work, however, consumes substantial resources in terms of CPU time and energy, which is not suitable for resource-limited mobile devices. To facilitate blockchain applications in future mobile Internet of Things systems, multiple access mobile edge computing appears to be an auspicious solution to solve the proof-of-work puzzles for mobile users. We first introduce a novel concept of edge computing for mobile blockchain. Then we introduce an economic approach for edge computing resource management. Moreover, a prototype of mobile edge computing enabled blockchain systems is presented with experimental results to justify the proposed concept.",
"title": ""
},
{
"docid": "855bffa65ab1a459223ae73aee777e85",
"text": "We present a program synthesis-oriented dataset consisting of human written problem statements and solutions for these problems. The problem statements were collected via crowdsourcing and the program solutions were extracted from humanwritten solutions in programming competitions, accompanied by input/output examples. We propose using this dataset for the program synthesis tasks aimed for working with real user-generated data. As a baseline we present few models, with best model achieving 5.6% accuracy, showcasing both complexity of the dataset and large room for",
"title": ""
},
{
"docid": "048081246f39fc80273d08493c770016",
"text": "Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. There are many skin color detection algorithms that are used to extract human skin color regions that are based on the thresholding technique since it is simple and fast for computation. The efficiency of each color space depends on its robustness to the change in lighting and the ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we are proposing a new threshold based on RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. Then it separates the Y channel, which represents the intensity of the color model from the U and V channels to eliminate the effects of luminance. After that the threshold values are selected based on the testing of the boundary of skin colors with the help of the color histogram. Finally, the threshold was applied to the input image to extract skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy and to compare the results of our threshold to the results of other’s thresholds to prove the efficiency of our approach. The results of the experiment show that the proposed threshold is more robust in terms of dealing with the complex background and light conditions than others. Keyword: Skin segmentation; Thresholding technique; Skin detection; Color space",
"title": ""
},
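To make the thresholding idea in the abstract above concrete, here is an illustrative sketch of YUV-based skin masking with OpenCV and NumPy. The numeric chrominance bounds and file names below are placeholder assumptions, not the thresholds or data reported by the authors.

```python
# Illustrative sketch of threshold-based skin detection in the YUV color
# space, assuming OpenCV and NumPy. The numeric bounds below are placeholder
# assumptions, not the thresholds reported in the abstract above.
import cv2
import numpy as np

def detect_skin(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate skin pixels."""
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    # Split channels; Y (luminance) is ignored to reduce lighting effects.
    _, u, v = cv2.split(yuv)
    # Placeholder chrominance bounds for skin tones (tune on real data).
    mask = ((u >= 80) & (u <= 130) & (v >= 136) & (v <= 200)).astype(np.uint8) * 255
    # Light morphological clean-up of the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    image = cv2.imread("input.jpg")          # hypothetical input path
    skin_mask = detect_skin(image)
    cv2.imwrite("skin_mask.png", skin_mask)
    skin_only = cv2.bitwise_and(image, image, mask=skin_mask)
    cv2.imwrite("skin_only.png", skin_only)
```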
{
"docid": "50044f80063441c9477acc40ac07e19a",
"text": "Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications including semantic search and question answering. The NLI problem has gained significant attention due to the release of large scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge – a central topic in artificial intelligence – has not received much attention vis-a-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness external knowledge to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-and-graph based models; and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.",
"title": ""
},
{
"docid": "09a8aee1ff3315562c73e5176a870c37",
"text": "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.",
"title": ""
},
{
"docid": "9f81e82aa60f06f3eac37d9bce3c9707",
"text": "Active contours are image segmentation methods that minimize the total energy of the contour to be segmented. Among the active contour methods, the radial methods have lower computational complexity and can be applied in real time. This work aims to present a new radial active contour technique, called pSnakes, using the 1D Hilbert transform as external energy. The pSnakes method is based on the fact that the beams in ultrasound equipment diverge from a single point of the probe, thus enabling the use of polar coordinates in the segmentation. The control points or nodes of the active contour are obtained in pairs and are called twin nodes. The internal energies as well as the external one, Hilbertian energy, are redefined. The results showed that pSnakes can be used in image segmentation of short-axis echocardiogram images and that they were effective in image segmentation of the left ventricle. The echo-cardiologist's golden standard showed that the pSnakes was the best method when compared with other methods. The main contributions of this work are the use of pSnakes and Hilbertian energy, as the external energy, in image segmentation. The Hilbertian energy is calculated by the 1D Hilbert transform. Compared with traditional methods, the pSnakes method is more suitable for ultrasound images because it is not affected by variations in image contrast, such as noise. The experimental results obtained by the left ventricle segmentation of echocardiographic images demonstrated the advantages of the proposed model. The results presented in this paper are justified due to an improved performance of the Hilbert energy in the presence of speckle noise.",
"title": ""
},
{
"docid": "c03bf622dde1bd81c0eb83a87e1f9924",
"text": "Image-schemas (e.g. CONTAINER, PATH, FORCE) are pervasive skeletal patterns of a preconceptual nature which arise from everyday bodily and social experiences and which enable us to mentally structure perceptions and events (Johnson 1987; Lakoff 1987, 1989). Within Cognitive Linguistics, these recurrent non-propositional models are taken to unify the different sensory and motor experiences in which they manifest themselves in a direct way and, most significantly, they may be metaphorically projected from the realm of the physical to other more abstract domains. In this paper, we intend to provide a cognitively plausible account of the OBJECT image-schema, which has received rather contradictory treatments in the literature. The OBJECT schema is experientially grounded in our everyday interaction with our own bodies and with other discrete entities. In the light of existence-related language (more specifically, linguistic expressions concerning the creation and destruction of both physical and abstract entities), it is argued that the OBJECT image-schema may be characterized as a basic image-schema, i.e. one that functions as a guideline for the activation of additional models, including other dependent image-schematic patterns (LINK, PART-WHOLE, CENTREPERIPHERY, etc.) which highlight various facets of the higher-level schema.",
"title": ""
},
{
"docid": "289a8d4cc1535b9ec07d85127f6096cd",
"text": "Automated tracking of events from chronologically ordered document streams is a new challenge for statistical text classification. Existing learning techniques must be adapted or improved in order to effectively handle difficult situations where the number of positive training instances per event is extremely small, the majority of training documents are unlabelled, and most of the events have a short duration in time. We adapted several supervised text categorization methods, specifically several new variants of the k-Nearest Neighbor (kNN) algorithm and a Rocchio approach, to track events. All of these methods showed significant improvement (up to 71% reduction in weighted error rates) over the performance of the original kNN algorithm on TDT benchmark collections, making kNN among the top-performing systems in the recent TDT3 official evaluation. Furthermore, by combining these methods, we significantly reduced the variance in performance of our event tracking system over different data collections, suggesting a robust solution for parameter optimization.",
"title": ""
},
{
"docid": "60a92a659fbfe0c81da9a6902e062455",
"text": "Public knowledge of crime and justice is largely derived from the media. This paper examines the influence of media consumption on fear of crime, punitive attitudes and perceived police effectiveness. This research contributes to the literature by expanding knowledge on the relationship between fear of crime and media consumption. This study also contributes to limited research on the media’s influence on punitive attitudes, while providing a much-needed analysis of the relationship between media consumption and satisfaction with the police. Employing OLS regression, the results indicate that respondents who are regular viewers of crime drama are more likely to fear crime. However, the relationship is weak. Furthermore, the results indicate that gender, education, income, age, perceived neighborhood problems and police effectiveness are statistically related to fear of crime. In addition, fear of crime, income, marital status, race, and education are statistically related to punitive attitudes. Finally, age, fear of crime, race, and perceived neighborhood problems are statistically related to perceived police effectiveness.",
"title": ""
},
{
"docid": "24fc1997724932c6ddc3311a529d7505",
"text": "In these days securing a network is an important issue. Many techniques are provided to secure network. Cryptographic is a technique of transforming a message into such form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key also known as secret-key cryptography algorithms and asymmetric key also known as public-key cryptography algorithms. In this paper we are reviewing different symmetric and asymmetric algorithms.",
"title": ""
},
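As a companion to the review above, the following hedged sketch contrasts the two algorithm families using the Python "cryptography" package: Fernet (AES-based) for the symmetric case and RSA-OAEP for the asymmetric case. The message and key sizes are arbitrary example choices, not recommendations from the reviewed paper.

```python
# Sketch contrasting symmetric (secret-key) and asymmetric (public-key)
# encryption using the Python "cryptography" package; purely illustrative
# of the two algorithm families discussed above.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"confidential network traffic"

# --- Symmetric: one shared secret key both encrypts and decrypts (AES under the hood).
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
ciphertext = f.encrypt(message)
assert f.decrypt(ciphertext) == message

# --- Asymmetric: the public key encrypts, only the private key decrypts (RSA-OAEP).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
rsa_ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(rsa_ciphertext, oaep) == message

print("symmetric and asymmetric round trips succeeded")
```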
{
"docid": "191031b86c491072d18012a5460d87e3",
"text": "Stepper motor drives exhibit advantages like open loop capability, high torque density and lower cost with respect to other brushless servo alternatives. However, the typical performances of conventional open loop stepper motor drives are limited, making them unsuitable where high speeds, fast dynamics and smooth motion is required. They are also easily prone to stall and usually produce loud audible noise. Recently, the increasing price of rare earth materials used in permanent magnets is making the use of high-quality PMSMs prohibitive when medium requirements are present. To achieve performances comparable to those of a servo drive, vector control is applied, since the hybrid stepper motor can be considered as a special case of the PMSM, characterized by the presence of two phases and a high pole count (usually 50 pole pairs). Removal of the position/speed sensor is therefore highly desirable to maintain low system costs. In this paper sensorless speed control is achieved by means of a simple yet reliable stationary reference frame back-EMF observer, that can be analytically tuned. The adoption of a standard three phase inverter contributes to the reduction of the system costs, while the injection of a small constant direct axis current leads to a strong reduction of the estimation noise effect, especially at low speed. Both a laboratory test bench and an actual industrial automation machine (i.e. high speed labeller) are considered for experiments, demonstrating the importance and effectiveness of the proposal.",
"title": ""
},
{
"docid": "93388c2897ec6ec7141bcc820ab6734c",
"text": "We address the task of single depth image inpainting. Without the corresponding color images, previous or next frames, depth image inpainting is quite challenging. One natural solution is to regard the image as a matrix and adopt the low rank regularization just as color image inpainting. However, the low rank assumption does not make full use of the properties of depth images. A shallow observation inspires us to penalize the nonzero gradients by sparse gradient regularization. However, statistics show that though most pixels have zero gradients, there is still a non-ignorable part of pixels, whose gradients are small but nonzero. Based on this property of depth images, we propose a low gradient regularization method in which we reduce the penalty for small gradients while penalizing the nonzero gradients to allow for gradual depth changes. The proposed low gradient regularization is integrated with the low rank regularization into the low rank low gradient approach for depth image inpainting. We compare our proposed low gradient regularization with the sparse gradient regularization. The experimental results show the effectiveness of our proposed approach.",
"title": ""
}
] |
scidocsrr
|
5586ddfafeca7f3e88573042260469d1
|
Governance in Social Media: A case study of the Wikipedia promotion process
|
[
{
"docid": "3c1f6ef650ce559f7e2d388347bf8e84",
"text": "Relations between users on social media sites often reflect a mixture of positive (friendly) and negative (antagonistic) interactions. In contrast to the bulk of research on social networks that has focused almost exclusively on positive interpretations of links between people, we study how the interplay between positive and negative relationships affects the structure of on-line social networks. We connect our analyses to theories of signed networks from social psychology. We find that the classical theory of structural balance tends to capture certain common patterns of interaction, but that it is also at odds with some of the fundamental phenomena we observe --- particularly related to the evolving, directed nature of these on-line networks. We then develop an alternate theory of status that better explains the observed edge signs and provides insights into the underlying social mechanisms. Our work provides one of the first large-scale evaluations of theories of signed networks using on-line datasets, as well as providing a perspective for reasoning about social media sites.",
"title": ""
}
] |
[
{
"docid": "07495a5eab98ad895d5a88a239ceb5bc",
"text": "The ability of the multiple model adaptive estimation method (MMAE) to detect faults based on a predefined hypothesis and the parameter-estimating ability of an extended Kalman filter (EKF) results in an efficient fault detection approach. This extended multiple model adaptive estimation method (EMMAE) has been investigated on a nonlinear model of an aircraft to estimate on-line the state vector of the system and the control surface deflection in case of failed actuators. A supervision module has been designed to enhance the performance of the EMMAE method and to appropriately change settings in a control allocation module. The results show that this reconfigurable flight control system is capable of detecting, isolating, and compensating for actuator faults of various types, without any need to add additional sensors to measure control-surface deflections or to change the flight controller",
"title": ""
},
{
"docid": "56d00919a57f91e89672c23919bb68db",
"text": "Now days, the power of internet is having an immense impact on human life and helps one to make important decisions. Since plenty of knowledge and valuable information is available on the internet therefore many users read review information given on web to take decisions such as buying products, watching movies, going to restaurants etc. Reviews contain user opinion about the product, service, event or topic. It is difficult for web users to read and understand the contents from large number of reviews. Whenever any detail is required in the document, this can be achieved by many probabilistic topic models. A topic model provides a generative model for documents and it defines a probabilistic scheme by which documents can be achieved. Topic model is an Integration of acquaintance and these acquaintances are blended with theme, where a theme is a fusion of terms. We describe Latent Dirichlet Markov Allocation 4 level hierarchical Bayesian Model (LDMA), planted on Latent Dirichlet Allocation (LDA) and Hidden Markov Model (HMM), which highlights on extracting multiword topics from text data. To retrieve the sentiment of the reviews, along with LDMA we will be using SentiWordNet and will compare our result to LDMA with feature extraction of baseline method of sentiment analysis.",
"title": ""
},
{
"docid": "3419c35e0dff7b47328943235419a409",
"text": "Several methods of classification of partially edentulous arches have been proposed and are in use. The most familiar classifications are those originally proposed by Kennedy, Cummer, and Bailyn. None of these classification systems include implants, simply because most of them were proposed before implants became widely accepted. At this time, there is no classification system for partially edentulous arches incorporating implants placed or to be placed in the edentulous spaces for a removable partial denture (RPD). This article proposes a simple classification system for partially edentulous arches with implants based on the Kennedy classification system, with modification, to be used for RPDs. It incorporates the number and positions of implants placed or to be placed in the edentulous areas. A different name, Implant-Corrected Kennedy (ICK) Classification System, is given to the new classification system to be differentiated from other partially edentulous arch classification systems.",
"title": ""
},
{
"docid": "22ef6b3fd2f4c926d81881039244511f",
"text": "Whereas in most cases a fatty liver remains free of inflammation, 10%-20% of patients who have fatty liver develop inflammation and fibrosis (nonalcoholic steatohepatitis [NASH]). Inflammation may precede steatosis in certain instances. Therefore, NASH could reflect a disease where inflammation is followed by steatosis. In contrast, NASH subsequent to simple steatosis may be the consequence of a failure of antilipotoxic protection. In both situations, many parallel hits derived from the gut and/or the adipose tissue may promote liver inflammation. Endoplasmic reticulum stress and related signaling networks, (adipo)cytokines, and innate immunity are emerging as central pathways that regulate key features of NASH.",
"title": ""
},
{
"docid": "94535b71855026738a0dad677f14e5b8",
"text": "Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. RE from RNNs can be argued to allow a deeper and more profound form of analysis of RNNs than other, more or less ad hoc methods. RE may give us understanding of RNNs in the intermediate levels between quite abstract theoretical knowledge of RNNs as a class of computing devices and quantitative performance evaluations of RNN instantiations. The development of techniques for extraction of rules from RNNs has been an active field since the early 1990s. This article reviews the progress of this development and analyzes it in detail. In order to structure the survey and evaluate the techniques, a taxonomy specifically designed for this purpose has been developed. Moreover, important open research issues are identified that, if addressed properly, possibly can give the field a significant push forward.",
"title": ""
},
{
"docid": "1ff61150d7c8359d3dead84612093754",
"text": "In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences and that is trained using automatically generated training examples. Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score of 88.13% when considering a region of interest of 60×60 meters. By considering a smaller region of interest, the agreement between predicted paths and ground-truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and at the same time human-interpretable.",
"title": ""
},
{
"docid": "8f978ac84eea44a593e9f18a4314342c",
"text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.",
"title": ""
},
{
"docid": "3a7dca2e379251bd08b32f2331329f00",
"text": "Canonical correlation analysis (CCA) is a method for finding linear relations between two multidimensional random variables. This paper presents a generalization of the method to more than two variables. The approach is highly scalable, since it scales linearly with respect to the number of training examples and number of views (standard CCA implementations yield cubic complexity). The method is also extended to handle nonlinear relations via kernel trick (this increases the complexity to quadratic complexity). The scalability is demonstrated on a large scale cross-lingual information retrieval task.",
"title": ""
},
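For readers unfamiliar with CCA, here is a minimal two-view example with scikit-learn on synthetic data. This is standard linear CCA, not the scalable multi-view or kernel variant described in the abstract above; it only illustrates the kind of linear relation between two views that the method seeks.

```python
# Minimal two-view CCA example with scikit-learn on synthetic data.
# This is standard CCA, not the scalable multi-view / kernel variant the
# abstract describes; it only illustrates the linear relation being sought.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 1000
shared = rng.normal(size=(n, 2))            # latent signal common to both views
X = np.hstack([shared, rng.normal(size=(n, 3))]) @ rng.normal(size=(5, 5))
Y = np.hstack([shared, rng.normal(size=(n, 4))]) @ rng.normal(size=(6, 6))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Correlation of the paired canonical variates (should be high for 2 components).
for i in range(2):
    r = np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.3f}")
```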
{
"docid": "db26de1462b3e8e53bf54846849ae2c2",
"text": "The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.",
"title": ""
},
{
"docid": "53d7816b9db8dd5b1d2f2fc2ebaebcf5",
"text": "Estimates suggest that up to 90% or more youth between 12 and 18 years have access to the Internet. Concern has been raised that this increased accessibility may lead to a rise in pornography seeking among children and adolescents, with potentially serious ramifications for child and adolescent sexual development. Using data from the Youth Internet Safety Survey, a nationally representative, cross-sectional telephone survey of 1501 children and adolescents (ages 10-17 years), characteristics associated with self-reported pornography seeking behavior, both on the Internet and using traditional methods (e.g., magazines), are identified. Seekers of pornography, both online and offline, are significantly more likely to be male, with only 5% of self-identified seekers being female. The vast majority (87%) of youth who report looking for sexual images online are 14 years of age or older, when it is developmentally appropriate to be sexually curious. Children under the age of 14 who have intentionally looked at pornography are more likely to report traditional exposures, such as magazines or movies. Concerns about a large group of young children exposing themselves to pornography on the Internet may be overstated. Those who report intentional exposure to pornography, irrespective of source, are significantly more likely to cross-sectionally report delinquent behavior and substance use in the previous year. Further, online seekers versus offline seekers are more likely to report clinical features associated with depression and lower levels of emotional bonding with their caregiver. Results of the current investigation raise important questions for further inquiry. Findings from these cross-sectional data provide justification for longitudinal studies aimed at parsing out temporal sequencing of psychosocial experiences.",
"title": ""
},
{
"docid": "ecc4f1d5fb66b816daa9ae514bd58b45",
"text": "In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.",
"title": ""
},
{
"docid": "ab589fb1d97849e95da05d7e9b1d0f4f",
"text": "We introduce a new speaker independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.",
"title": ""
},
{
"docid": "8eca353064d3b510b32c486e5f26c264",
"text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.",
"title": ""
},
{
"docid": "32a65d95c26762ab8d72584fc3c39195",
"text": "This article presents the first steps towards a sociological understanding of emergent social media. This article uses Twitter, the most popular social media website, as its focus. Recently, the social media site has been prominently associated with social movements in Libya, Egypt, Tunisia, and Algeria. Rather than rush to breathlessly describe its novel role in shaping contemporary social movements, this article takes a step back and considers Twitter in historical and broad sociological terms. This article is not intended to provide empirical evidence or a fully formed theoretical understanding of Twitter, but rather to provide a selected literature review and a set of directions for sociologists. The article makes connections specifically to Erving Goffman’s interactionist work, not only to make the claim that some existing sociological theory can be used to think critically about Twitter, but also to provide some initial thoughts on how such theoretical innovations can be developed.",
"title": ""
},
{
"docid": "c6003079ee0b54e65aabfdb677ae8abc",
"text": "The sensitive artificial listener (SAL) project is interested in ways to elicit emotions from humans. This can be done in several ways; one of them is by presenting colors. The use of colors to stimulate a certain feeling, may it be calm, aggressive, energetic, happy etc. This paper tries to provide a method for generating colors to elicit a certain feelings based on an emotional state from the SAL agent. This is done by mapping the emotional state of the agent into a color. The emotional state is represented by two values Pleasure and Arousal these two form the two dimensional space in which the distinctive emotions can be placed. The emotional state of the agent is a point (coordinate) on the same 2d space, and by looking at the position of this point the current corresponding color can be calculated by interpolating between the emotions on the 2d space. The end use is to use colors elicit a certain feeling in the user, how the agent uses this to his advantage is up to the agent. Examples of use can be, the agents virtual body changes color (expressing his emotion), or the complete virtual world gets a change in color glow (a narrow emotional commitment), or a more physical example the lights in your house change colors (a broad emotional commitment).",
"title": ""
},
{
"docid": "ff24e5e100d26c9de2bde8ae8cd7fec4",
"text": "The Global Positioning System (GPS) grows into a ubiquitous utility that provides positioning, navigation, and timing (PNT) services. As an essential element of the global information infrastructure, cyber security of GPS faces serious challenges. Some mission-critical systems even rely on GPS as a security measure. However, civilian GPS itself has no protection against malicious acts such as spoofing. GPS spoofing breaches authentication by forging satellite signals to mislead users with wrong location/timing data that threatens homeland security. In order to make civilian GPS secure and resilient for diverse applications, we must understand the nature of attacks. This paper proposes a novel attack modeling of GPS spoofing with event-driven simulation package. Simulation supplements usual experiments to limit incidental harms and to comprehend a surreptitious scenario. We also provide taxonomy of GPS spoofing through characterization. The work accelerates the development of defense technology against GPS-based attacks.",
"title": ""
},
{
"docid": "37f861984ad6aeeb6981835c33db2f7b",
"text": "Emergence of resistance among the most important bacterial pathogens is recognized as a major public health threat affecting humans worldwide. Multidrug-resistant organisms have not only emerged in the hospital environment but are now often identified in community settings, suggesting that reservoirs of antibiotic-resistant bacteria are present outside the hospital. The bacterial response to the antibiotic \"attack\" is the prime example of bacterial adaptation and the pinnacle of evolution. \"Survival of the fittest\" is a consequence of an immense genetic plasticity of bacterial pathogens that trigger specific responses that result in mutational adaptations, acquisition of genetic material, or alteration of gene expression producing resistance to virtually all antibiotics currently available in clinical practice. Therefore, understanding the biochemical and genetic basis of resistance is of paramount importance to design strategies to curtail the emergence and spread of resistance and to devise innovative therapeutic approaches against multidrug-resistant organisms. In this chapter, we will describe in detail the major mechanisms of antibiotic resistance encountered in clinical practice, providing specific examples in relevant bacterial pathogens.",
"title": ""
},
{
"docid": "139f750d4e53b86bc785785b7129e6ee",
"text": "Enterprise Resource Planning (ERP) systems hold great promise for integrating business processes and have proven their worth in a variety of organizations. Yet the gains that they have enabled in terms of increased productivity and cost savings are often achieved in the face of daunting usability problems. While one frequently hears anecdotes about the difficulties involved in using ERP systems, there is little documentation of the types of problems typically faced by users. The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 company in the first years of its large-scale ERP implementation. This study also demonstrates the promise of using collaboration theory to evaluate usability characteristics of existing systems and to design new systems. Given the impressive results already achieved by some corporations with these systems, imagine how much more would be possible if understanding how to use them weren’t such an",
"title": ""
},
{
"docid": "9779a5ac2ada20f0ccd5751b0784e9cc",
"text": "Early-stage romantic love can induce euphoria, is a cross-cultural phenomenon, and is possibly a developed form of a mammalian drive to pursue preferred mates. It has an important influence on social behaviors that have reproductive and genetic consequences. To determine which reward and motivation systems may be involved, we used functional magnetic resonance imaging and studied 10 women and 7 men who were intensely \"in love\" from 1 to 17 mo. Participants alternately viewed a photograph of their beloved and a photograph of a familiar individual, interspersed with a distraction-attention task. Group activation specific to the beloved under the two control conditions occurred in dopamine-rich areas associated with mammalian reward and motivation, namely the right ventral tegmental area and the right postero-dorsal body and medial caudate nucleus. Activation in the left ventral tegmental area was correlated with facial attractiveness scores. Activation in the right anteromedial caudate was correlated with questionnaire scores that quantified intensity of romantic passion. In the left insula-putamen-globus pallidus, activation correlated with trait affect intensity. The results suggest that romantic love uses subcortical reward and motivation systems to focus on a specific individual, that limbic cortical regions process individual emotion factors, and that there is localization heterogeneity for reward functions in the human brain.",
"title": ""
}
] |
scidocsrr
|
0fecf46a839431618a262136ec4a7e7b
|
On dynamic data-driven selection of sensor streams
|
[
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
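The following is a simplified, hypothetical sketch of CLARANS-style clustering: k-medoids improved by examining randomly chosen neighbors, where a neighbor swaps one medoid for one non-medoid point. The parameter names (numlocal, maxneighbor) mirror the original idea, but their values, the distance measure, and the toy data are illustrative assumptions, not the paper's configuration.

```python
# Simplified sketch of CLARANS-style clustering: k-medoids improved by
# examining randomly chosen neighbors (swap one medoid for one non-medoid).
# Parameter values and data are illustrative, not tuned as in the paper.
import numpy as np

def total_cost(points, medoid_idx):
    """Sum of distances from each point to its nearest medoid."""
    d = np.linalg.norm(points[:, None, :] - points[medoid_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def clarans(points, k, numlocal=5, maxneighbor=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    best_medoids, best_cost = None, np.inf
    for _ in range(numlocal):                     # restart from random medoids
        current = rng.choice(n, size=k, replace=False)
        current_cost = total_cost(points, current)
        tried = 0
        while tried < maxneighbor:                # examine random neighbors
            swap_out = rng.integers(k)
            swap_in = rng.integers(n)
            if swap_in in current:
                continue
            neighbor = current.copy()
            neighbor[swap_out] = swap_in
            neighbor_cost = total_cost(points, neighbor)
            if neighbor_cost < current_cost:      # move to the better neighbor
                current, current_cost, tried = neighbor, neighbor_cost, 0
            else:
                tried += 1
        if current_cost < best_cost:
            best_medoids, best_cost = current, current_cost
    return best_medoids, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in ((0, 0), (5, 5), (0, 6))])
    medoids, cost = clarans(data, k=3)
    print("medoid coordinates:\n", data[medoids], "\ntotal cost:", cost)
```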
] |
[
{
"docid": "d9084b01f0ff2cfe05082d8131196d7f",
"text": "3D pose estimation is a key component of many important computer vision tasks like autonomous navigation and robot manipulation. Current state-of-the-art approaches for 3D object pose estimation, like Viewpoints & Keypoints and Render for CNN, solve this problem by discretizing the pose space into bins and solving a pose-classification task. We argue that 3D pose is continuous and can be solved in a regression framework if done with the right representation, data augmentation and loss function. We modify a standard VGG network for the task of 3D pose regression and show competitive performance compared to state-of-the-art.",
"title": ""
},
{
"docid": "09c5fdbd76b7e81ef95c8edcc367bce7",
"text": "Convolution Neural Networks (CNN), known as ConvNets are widely used in many visual imagery application, object classification, speech recognition. After the implementation and demonstration of the deep convolution neural network in Imagenet classification in 2012 by krizhevsky, the architecture of deep Convolution Neural Network is attracted many researchers. This has led to the major development in Deep learning frameworks such as Tensorflow, caffe, keras, theno. Though the implementation of deep learning is quite possible by employing deep learning frameworks, mathematical theory and concepts are harder to understand for new learners and practitioners. This article is intended to provide an overview of ConvNets architecture and to explain the mathematical theory behind it including activation function, loss function, feedforward and backward propagation. In this article, grey scale image is taken as input information image, ReLU and Sigmoid activation function are considered for developing the architecture and cross-entropy loss function is used for computing the difference between predicted value and actual value. The architecture is developed in such a way that it can contain one convolution layer, one pooling layer, and multiple dense layers.",
"title": ""
},
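As a small illustration of the mathematical pieces named in the abstract above, the NumPy fragment below implements ReLU, sigmoid, softmax, the cross-entropy loss, and one gradient step through a dense layer. It is an assumed, self-contained example on fake data, not the article's full one-convolution-layer architecture.

```python
# NumPy sketch of the building blocks named in the abstract: ReLU and sigmoid
# activations and the cross-entropy loss with its gradient through a softmax.
# This is an illustrative fragment, not the article's full network.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot_labels, eps=1e-12):
    return -np.mean(np.sum(one_hot_labels * np.log(probs + eps), axis=1))

# Toy forward/backward step for a single dense layer on fake data.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))                 # e.g. 8 flattened feature maps
y = np.eye(3)[rng.integers(0, 3, size=8)]    # one-hot labels, 3 classes
W = rng.normal(scale=0.1, size=(16, 3))

logits = relu(X) @ W
probs = softmax(logits)
loss = cross_entropy(probs, y)

# With softmax + cross-entropy, dL/dlogits = (probs - y) / batch_size.
grad_logits = (probs - y) / X.shape[0]
grad_W = relu(X).T @ grad_logits
W -= 0.1 * grad_W                            # one gradient-descent update
print(f"loss before update: {loss:.4f}")
```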
{
"docid": "c26ff98ac6cc027b07fec213a192a446",
"text": "Basic to all motile life is a differential approach/avoid response to perceived features of environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation, and expectations. The organism organizes itself in relation to the challenge. We propose to call this the \"preparatory set\" (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of body-mind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.",
"title": ""
},
{
"docid": "f95863031edd888b9f841cde0af4c9be",
"text": "The research tries to identify factors that are critical for a Big Data project’s success. In total 27 success factors could be identified throughout the analysis of these published case studies. Subsequently, to the identification the success factors were categorized according to their importance for the project’s success. During the categorization process 6 out of the 27 success factors were declared mission critical. Besides this identification of success factors, this thesis provides a process model, as a suggested way to approach Big Data projects. The process model is divided into separate phases. In addition to a description of the tasks to fulfil, the identified success factors are assigned to the individual phases of the analysis process. Finally, this thesis provides a process model for Big Data projects and also assigns success factors to individual process stages, which are categorized according to their importance for the success of the entire project.",
"title": ""
},
{
"docid": "895e3932443118e7dc40dc89c3bdb6fa",
"text": "Bed-making is a universal home task that can be challenging for senior citizens due to reaching motions. Automating bed-making has multiple technical challenges such as perception in an unstructured environments, deformable object manipulation, obstacle avoidance and sequential decision making. We explore how DART, an LfD algorithm for learning robust policies, can be applied to automating bed making without fiducial markers with a Toyota Human Support Robot (HSR). By gathering human demonstrations for grasping the sheet and failure detection, we can learn deep neural network policies that leverage pre-trained YOLO features to automate the task. Experiments with a scale bed and distractors placed on the bed, suggest policies learned on 50 demonstrations with DART achieve 96% sheet coverage, which is over 200% better than a corner detector baseline using contour detection.",
"title": ""
},
{
"docid": "8883e758297e13a1b3cc3cf2dfc1f6c4",
"text": "Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threating when it grows beyond the dermis of the skin. Hence, depth is an important factor to diagnose melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieve accurate results. Apart from melanoma, in-situ melanoma the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluations, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets is considered. Different feature set combinations is considered and performance is evaluated. Significant performance improvement is reported the post inclusion of estimated depth and 3-D features. The good classification scores of sensitivity = 96%, specificity = 97% on PH2 data set and sensitivity = 98%, specificity = 99% on the ATLAS data set is achieved. Experiments conducted to estimate tumor depth from 3-D lesion reconstruction is presented. Experimental results achieved prove that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images.",
"title": ""
},
{
"docid": "e7eea375a6c3e3f96959d5031796b00d",
"text": "In content-based semantic recommender systems the items to be considered are defined in terms of a set of semantic attributes, which may take as values the concepts of a domain ontology. The aim of these systems is to suggest to the user the items that fit better with his/her preferences, stored in the user profile. When large ontologies are considered it is unrealistic to expect to have complete information about the user preference on each concept. In this work, we explain how the Weighted Ordered Weighted Averaging operator may be used to deduce the user preferences on all concepts, given the structure of the ontology and some partial preferential information. The parameters of the WOWA operator enable to establish the desired aggregation policy, which ranges from a full conjunction to a full disjunction. Different aggregation policies have been analyzed in a case study involving the recommendation of touristic activities in the city of Tarragona. Several profiles have been compared and the results indicate that different aggregation policies should be used depending on the type of user. The amount of information available in the ontology must be also taken into account in order to establish the parameters of the proposed algorithm.",
"title": ""
},
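To make the WOWA aggregation mentioned above concrete, here is a hedged sketch that blends per-source importance weights with an ordered-weighting policy. It uses the common construction of a piecewise-linear quantifier interpolating the cumulative OWA weights; the linear interpolation, example scores, and weight vectors are illustrative assumptions, not the exact setup of the paper.

```python
# Sketch of the Weighted OWA (WOWA) aggregation used to blend importance
# weights with an ordered-weighting policy (conjunctive vs. disjunctive).
# The quantifier interpolation is linear for simplicity; this is an
# illustrative implementation, not the exact one used in the paper.
import numpy as np

def wowa(values, importance, owa_weights):
    values = np.asarray(values, dtype=float)
    p = np.asarray(importance, dtype=float)
    w = np.asarray(owa_weights, dtype=float)
    p = p / p.sum()
    w = w / w.sum()
    n = len(values)

    # Piecewise-linear quantifier w* interpolating the cumulative OWA weights.
    xs = np.linspace(0.0, 1.0, n + 1)
    ys = np.concatenate(([0.0], np.cumsum(w)))
    w_star = lambda t: np.interp(t, xs, ys)

    order = np.argsort(values)[::-1]           # sort values in decreasing order
    cum_p = np.cumsum(p[order])
    prev = np.concatenate(([0.0], cum_p[:-1]))
    omega = w_star(cum_p) - w_star(prev)       # effective weight per position
    return float(np.dot(omega, values[order]))

# Example: three preference scores with unequal importance.
scores = [0.9, 0.4, 0.7]
importance = [0.5, 0.3, 0.2]
print("disjunctive (optimistic):", wowa(scores, importance, [0.7, 0.2, 0.1]))
print("conjunctive (pessimistic):", wowa(scores, importance, [0.1, 0.2, 0.7]))
```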
{
"docid": "32a4c17a53643042a5c19180bffd7c21",
"text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.",
"title": ""
},
{
"docid": "c4a104956ee7e0db325348e683947134",
"text": "Intracellular pH (pH(i)) plays a critical role in the physiological and pathophysiological processes of cells, and fluorescence imaging using pH-sensitive indicators provides a powerful tool to assess the pH(i) of intact cells and subcellular compartments. Here we describe a nanoparticle-based ratiometric pH sensor, comprising a bright and photostable semiconductor quantum dot (QD) and pH-sensitive fluorescent proteins (FPs), exhibiting dramatically improved sensitivity and photostability compared to BCECF, the most widely used fluorescent dye for pH imaging. We found that Förster resonance energy transfer between the QD and multiple FPs modulates the FP/QD emission ratio, exhibiting a >12-fold change between pH 6 and 8. The modularity of the probe enables customization to specific biological applications through genetic engineering of the FPs, as illustrated by the altered pH range of the probe through mutagenesis of the fluorescent protein. The QD-FP probes facilitate visualization of the acidification of endosomes in living cells following polyarginine-mediated uptake. These probes have the potential to enjoy a wide range of intracellular pH imaging applications that may not be feasible with fluorescent proteins or organic fluorophores alone.",
"title": ""
},
{
"docid": "747319dc1492cf26e9b9112e040cbba7",
"text": "Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detectionguided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.",
"title": ""
},
{
"docid": "ed06226e548fac89cc06a798618622c6",
"text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.",
"title": ""
},
{
"docid": "ef97a18b62a24ef1633cd0338495a4e5",
"text": "Supervisory Control and Data Acquisition (SCADA) system monitors and controls industrial process in physical critical Infrastructures. It is thus of vital importance that any vulnerabilities of SCADA system must be identified and mitigated. DNP3 is and open SCADA network protocol that is mainly used in electrical utilities. However, the security mechanisms of DNP3 were neglected at its design stage. For example, the coverage of DNP3 Secure Authentication is limited to itself only. In our experiments, we have successfully performed a number of attacks to DNP3 on a small-scale testbed. Hence, this paper will not only discuss our experimental results but also propose a novel hybrid method that can enhance the security of existing DNP3 protocol by combining both encryption and authentication techniques.",
"title": ""
},
{
"docid": "19a0954fb21092853d9577e25019aaee",
"text": "In this paper the design of a CMOS cascoded operational amplifier is described. Due to technology scaling the design of a former developed operational amplifier has now overcome its stability problems. A stable three stage operational amplifier is presented. A layout has been created automatically by using the ALADIN tool. With help of the extracted layout the performance data of the amplifier is simulated.",
"title": ""
},
{
"docid": "e38814c1869f8dd0209f0e5f7ceaf4dc",
"text": "This paper describes a novel image-based pointing-tracking feedback control scheme for an inertially stabilized double-gimbal airborne camera platform combined with a computer vision system. The key idea is to enhance the intuitive decoupled controller structure with measurements of the camera inertial angular rate around its optical axis. The resulting controller can also compensate for the apparent translation between the camera and the observed object, but then the velocity of this mutual translation must be measured or estimated. Even though the proposed controller is more robust against longer sampling periods of the computer-vision system then the decoupled controller, a sketch of a simple compensation of this delay is also given. Numerical simulations are accompanied by laboratory experiments with a real benchmark system.",
"title": ""
},
{
"docid": "36af986f61252f221a8135e80fe6432d",
"text": "This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on",
"title": ""
},
{
"docid": "caf0b3a9385dffe3663c4847c1637cec",
"text": "In this paper we present a novel method for plausible real-time rendering of indirect illumination effects for diffuse and non-diffuse surfaces. The scene geometry causing indirect illumination is captured by an extended shadow map, as proposed in previous work, and secondary light sources are distributed on directly lit surfaces. One novelty is the rendering of these secondary lights' contribution by splatting in a deferred shading process, which decouples rendering time from scene complexity. An importance sampling strategy, implemented entirely on the GPU, allows efficient selection of secondary light sources. Adapting the light's splat shape to surface glossiness also allows efficient rendering of caustics. Unlike previous approaches the approximated indirect lighting does barely exhibit coarse artifacts - even under unfavorable viewing and lighting conditions. We describe an implementation on contemporary graphics hardware, show a comparison to previous approaches, and present adaptation to and results in game-typical applications.",
"title": ""
},
{
"docid": "0988297cfd3aaeb077e2be71f4106c81",
"text": "HadoopDB is a hybrid of MapReduce and DBMS technologies, designed to meet the growing demand of analyzing massive datasets on very large clusters of machines. Our previous work has shown that HadoopDB approaches parallel databases in performance and still yields the scalability and fault tolerance of MapReduce-based systems. In this demonstration, we focus on HadoopDB's flexible architecture and versatility with two real world application scenarios: a semantic web data application for protein sequence analysis and a business data warehousing application based on TPC-H. The demonstration offers a thorough walk-through of how to easily build applications on top of HadoopDB.",
"title": ""
},
{
"docid": "2af56829daf6d2c6c633c759d07f2208",
"text": "Height of Burst (HOB) sensor is one of the critical parts in guided missiles. While seekers control the guiding scheme of the missile, proximity sensors set the trigger for increased effectiveness of the warhead. For the well-developed guided missiles of Roketsan, a novel proximity sensor is developed. The design of the sensor is for multi-purpose use. In this presentation, the application of the sensor is explained for operation as a HOB sensor in the range of 3m–50m with ± 1m accuracy. Measurement results are also presented. The same sensor is currently being developed for proximity sensor for missile defence.",
"title": ""
},
{
"docid": "f2014c61ab20bcb3dc586b660116b8d8",
"text": "Detection of stationary foreground objects (i.e., moving objects that remain static throughout several frames) has attracted the attention of many researchers over the last decades and, consequently, many new ideas have been recently proposed, trying to achieve high-quality detections in complex scenarios with the lowest misdetections, while keeping real-time constraints. Most of these strategies are focused on detecting abandoned objects. However, there are some approaches that also allow detecting partially-static foreground objects (e.g. people remaining temporarily static) or stolen objects (i.e., objects removed from the background of the scene). This paper provides a complete survey of the most relevant approaches for detecting all kind of stationary foreground objects. The aim of this survey is not to compare the existing methods, but to provide the information needed to get an idea of the state of the art in this field: kinds of stationary foreground objects, main challenges in the field, main datasets for testing the detection of stationary foreground, main stages in the existing approaches and algorithms typically used in such stages.",
"title": ""
},
{
"docid": "a5a36d7d267e299088d05dafa1ce2b6c",
"text": "Agent-based modelling is a bottom-up approach to understanding systems which provides a powerful tool for analysing complex, non-linear markets. The method involves creating artificial agents designed to mimic the attributes and behaviours of their real-world counterparts. The system’s macro-observable properties emerge as a consequence of these attributes and behaviours and the interactions between them. The simulation output may be potentially used for explanatory, exploratory and predictive purposes. The aim of this paper is to introduce the reader to some of the basic concepts and methods behind agent-based modelling and to present some recent business applications of these tools, including work in the telecoms and media markets.",
"title": ""
}
] |
scidocsrr
|
1d6af4432894f8c42229c0ba1756dfa3
|
Echoes from the past: how technology mediated reflection improves well-being
|
[
{
"docid": "d5870092a3e8401654b5b9948c77cb0a",
"text": "Recent research shows that there has been increased interest in investigating the role of mood and emotions in the HCI domain. Our moods, however, are complex. They are affected by many dynamic factors and can change multiple times throughout each day. Furthermore, our mood can have significant implications in terms of our experiences, our actions and most importantly on our interactions with other people. We have developed MobiMood, a proof-of-concept social mobile application that enables groups of friends to share their moods with each other. In this paper, we present the results of an exploratory field study of MobiMood, focusing on explicit mood sharing in-situ. Our results highlight that certain contextual factors had an effect on mood and the interpretation of moods. Furthermore, mood sharing and mood awareness appear to be good springboards for conversations and increased communication among users. These and other findings lead to a number of key implications in the design of mobile social awareness applications.",
"title": ""
}
] |
[
{
"docid": "bb447bbd4df92339bace55dc5610fbcc",
"text": "Fuzz testing has helped security researchers and organizations discover a large number of vulnerabilities. Although it is efficient and widely used in industry, hardly any empirical studies and experience exist on the customization of fuzzers to real industrial projects. In this paper, collaborating with the engineers from Huawei, we present the practice of adapting fuzz testing to a proprietary message middleware named libmsg, which is responsible for the message transfer of the entire distributed system department. We present the main obstacles coming across in applying an efficient fuzzer to libmsg, including system configuration inconsistency, system build complexity, fuzzing driver absence. The solutions for those typical obstacles are also provided. For example, for the most difficult and expensive obstacle of writing fuzzing drivers, we present a low-cost approach by converting existing sample code snippets into fuzzing drivers. After overcoming those obstacles, we can effectively identify software bugs, and report 9 previously unknown vulnerabilities, including flaws that lead to denial of service or system crash.",
"title": ""
},
{
"docid": "6a51e7a1b32a844160ba6a0e3b329b46",
"text": "We present an overview of the current pharmacological treatment of urinary incontinence (UI) in women, according to the latest evidence available. After a brief description of the lower urinary tract receptors and mediators (detrusor, bladder neck, and urethra), the potential sites of pharmacological manipulation in the treatment of UI are discussed. Each class of drug used to treat UI has been evaluated, taking into account published rate of effectiveness, different doses, and way of administration. The prevalence of the most common adverse effects and overall compliance had also been pointed out, with cost evaluation after 1 month of treatment for each class of drug. Moreover, we describe those newer agents whose efficacy and safety need to be further investigated. We stress the importance of a better understanding of the causes and pathophysiology of UI to ensure newer and safer treatments for such a debilitating condition.",
"title": ""
},
{
"docid": "6fd3f4ab064535d38c01f03c0135826f",
"text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.",
"title": ""
},
{
"docid": "78c8331beb0d09570c4063fab7d21f2d",
"text": "This paper presents a new single stage dc-dc boost converter topology with very large gain conversion ratio as a switched inductor multilevel boost converter (SIMLBC). It is a PWM-based dc-dc converter which combines the Switched-Inductor Structures and the switching capacitor function to provide a very large output voltage with different output dc levels which makes it suitable for multilevel inverter applications. The proposed topology has only single switch like the conventional dc-dc converter which can be controlled in a very simple way. In addition to, two inductors, 2N+2 diodes, N is the number of output dc voltage levels, and 2N-1 dc capacitors. A high switching frequency is employed to decrease the size of these components and thus much increasing the dynamic performance. The proposed topology has been compared with the existence dc-dc boost converters and it gives a higher voltage gain conversion ratio. The proposed converter has been analyzed, simulated and a prototype has been built and experimentally tested. Simulation and experimental results have been provided for validation.",
"title": ""
},
{
"docid": "2f02235636c5c0aecd8918cba512888d",
"text": "To determine whether an AIDS prevention mass media campaign influenced risk perception, self-efficacy and other behavioural predictors. We used household survey data collected from 2,213 sexually experienced male and female Kenyans aged 15-39. Respondents were administered a questionnaire asking them about their exposure to branded and generic mass media messages concerning HIV/AIDS and condom use. They were asked questions concerning their personal risk perception, self-efficacy, condom effectiveness, condom availability, and their embarrassment in obtaining condoms. Logistic regression analysis was used to determine the impact of exposure to mass media messages on these predictors of behaviour change. Those exposed to branded advertising messages were significantly more likely to consider themselves at higher risk of acquiring HIV and to believe in the severity of AIDS. Exposure to branded messages was also associated with a higher level of personal self-efficacy, a greater belief in the efficacy of condoms, a lower level of perceived difficulty in obtaining condoms and reduced embarrassment in purchasing condoms. Moreover, there was a dose-response relationship: a higher intensity of exposure to advertising was associated with more positive outcomes. Exposure to generic advertising messages was less frequently associated with positive health beliefs and these relationships were also weaker. Branded mass media campaigns that promote condom use as an attractive lifestyle choice are likely to contribute to the development of perceptions that are conducive to the adoption of condom use.",
"title": ""
},
{
"docid": "a2f46b51b65c56acf6768f8e0d3feb79",
"text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit",
"title": ""
},
{
"docid": "4417f505ed279689afa0bde104b3d472",
"text": "A single-cavity dual-mode substrate integrated waveguide (SIW) bandpass filter (BPF) for X-band application is presented in this paper. Coplanar waveguide (CPW) is used as SIW-microstrip transition in this design. Two slots of the CPW with unequal lengths are used to excite two degenerate modes, i.e. TE102 and TE201. A slot line is etched on the ground plane of the SIW cavity for perturbation. Its size and position are related to the effect of mode-split, namely the coupling between the two degenerate modes. Due to the cancellation of the two modes, a transmission zero in the lower stopband of the BPF is achieved, which improves the selectivity of the proposed BPF. And the location of the transmission zero can be controlled by adjusting the position and the size of the slot line perturbation properly. By introducing source-load coupling, an additional transmission zero is produced in the upper stopband of the BPF, it enhances the stopband performance of the BPF. Influences of the slot line perturbation on the BPF have been studied. A dual-mode BPF for X-band application has been designed, fabricated and measured. A good agreement between simulation and measurement verifies the validity of this design methodology.",
"title": ""
},
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
},
{
"docid": "1dc5a78a3a9c072f1f71da4aa257d3f2",
"text": "A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks o er an e cient and principled approach for avoiding the over tting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study.",
"title": ""
},
{
"docid": "a6785836b67bdf806e09012a45e05fd3",
"text": "Cloud computing is an emerging and popular method of accessing shared and dynamically configurable resources via the computer network on demand. Cloud computing is excessively used by mobile applications to offload data over the network to the cloud. There are some security and privacy concerns using both mobile devices to offload data to the facilities provided by the cloud providers. One of the critical threats facing cloud users is the unauthorized access by the insiders (cloud administrators) or the justification of location where the cloud providers operating. Although, there exist variety of security mechanisms to prevent unauthorized access by unauthorized user by the cloud administration, but there is no security provision to prevent unauthorized access by the cloud administrators to the client data on the cloud computing. In this paper, we demonstrate how steganography, which is a secrecy method to hide information, can be used to enhance the security and privacy of data (images) maintained on the cloud by mobile applications. Our proposed model works with a key, which is embedded in the image along with the data, to provide an additional layer of security, namely, confidentiality of data. The practicality of the proposed method is represented via a simple case study.",
"title": ""
},
{
"docid": "aa98236ba9b9468b4780a3c8be27b62c",
"text": "The final goal of Interpretable Semantic Textual Similarity (iSTS) is to build systems that explain which are the differences and commonalities between two sentences. The task adds an explanatory level on top of STS, formalized as an alignment between the chunks in the two input sentences, indicating the relation and similarity score of each alignment. The task provides train and test data on three datasets: news headlines, image captions and student answers. It attracted nine teams, totaling 20 runs. All datasets and the annotation guideline are freely available1",
"title": ""
},
{
"docid": "c091e5b24dc252949b3df837969e263a",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "bd2c3ee69cda5c08eb106e0994a77186",
"text": "This paper explores the combination of self-organizing map (SOM) and feedback, in order to represent sequences of inputs. In general, neural networks with time-delayed feedback represent time implicitly, by combining current inputs and past activities. It has been difficult to apply this approach to SOM, because feedback generates instability during learning. We demonstrate a solution to this problem, based on a nonlinearity. The result is a generalization of SOM that learns to represent sequences recursively. We demonstrate that the resulting representations are adapted to the temporal statistics of the input series.",
"title": ""
},
{
"docid": "ec4bf9499f16c415ccb586a974671bf1",
"text": "Memory circuit elements, namely memristive, memcapacitive and meminductive systems, are gaining considerable attention due to their ubiquity and use in diverse areas of science and technology. Their modeling within the most widely used environment, SPICE, is thus critical to make substantial progress in the design and analysis of complex circuits. Here, we present a collection of models of different memory circuit elements and provide a methodology for their accurate and reliable modeling in the SPICE environment. We also provide codes of these models written in the most popular SPICE versions (PSpice, LTspice, HSPICE) for the benefit of the reader. We expect this to be of great value to the growing community of scientists interested in the wide range of applications of memory circuit elements.",
"title": ""
},
{
"docid": "8d4ad49a599e68e28fdcf9e5e92d78ff",
"text": "The regulation of Ace2 and morphogenesis (RAM) network is a protein kinase signaling pathway conserved among eukaryotes from yeasts to humans. Among fungi, the RAM network has been most extensively studied in the model yeast Saccharomyces cerevisiae and has been shown to regulate a range of cellular processes, including daughter cell-specific gene expression, cell cycle regulation, cell separation, mating, polarized growth, maintenance of cell wall integrity, and stress signaling. Increasing numbers of recent studies on the role of the RAM network in pathogenic fungal species have revealed that this network also plays an important role in the biology and pathogenesis of these organisms. In addition to providing a brief overview of the RAM network in S. cerevisiae, we summarize recent developments in the understanding of RAM network function in the human fungal pathogens Candida albicans, Candida glabrata, Cryptococcus neoformans, Aspergillus fumigatus, and Pneumocystis spp.",
"title": ""
},
{
"docid": "da5339bb74d6af2bfa7c8f46b4f50bb3",
"text": "Conversational agents are exploding in popularity. However, much work remains in the area of non goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million dollar university competition where sixteen selected university teams built conversational agents to deliver the best social conversational experience. Alexa Prize provided the academic community with the unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is key element underlying the challenge of building non-goal oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgement. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, to date it is the largest setting for evaluating agents with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.",
"title": ""
},
{
"docid": "23f2f6e5dd50942809aece136c26e549",
"text": "Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications. In this study, we analyse the semantics of unigram pivot paraphrases and use a graph-based sense induction approach to unveil hidden sense distinctions in the paraphrase sets. The comparison of the acquired senses to gold data from the Lexical Substitution shared task (McCarthy and Navigli, 2007) demonstrates that sense distinctions exist in the paraphrase sets and highlights the need for a disambiguation step in applications using this resource.",
"title": ""
},
{
"docid": "0e5f4253ea4fba9c9c42dd579cbba76c",
"text": "Binary code search has received much attention recently due to its impactful applications, e.g., plagiarism detection, malware detection and software vulnerability auditing. However, developing an effective binary code search tool is challenging due to the gigantic syntax and structural differences in binaries resulted from different compilers, architectures and OSs. In this paper, we propose BINGO — a scalable and robust binary search engine supporting various architectures and OSs. The key contribution is a selective inlining technique to capture the complete function semantics by inlining relevant library and user-defined functions. In addition, architecture and OS neutral function filtering is proposed to dramatically reduce the irrelevant target functions. Besides, we introduce length variant partial traces to model binary functions in a program structure agnostic fashion. The experimental results show that BINGO can find semantic similar functions across architecture and OS boundaries, even with the presence of program structure distortion, in a scalable manner. Using BINGO, we also discovered a zero-day vulnerability in Adobe PDF Reader, a COTS binary.",
"title": ""
},
{
"docid": "87dd54ed1ea62ee085323d37bbf7b4b2",
"text": "We examine the use of modern recommender system technology to aid command awareness in complex software applications. We first describe our adaptation of traditional recommender system algorithms to meet the unique requirements presented by the domain of software commands. A user study showed that our item-based collaborative filtering algorithm generates 2.1 times as many good suggestions as existing techniques. Motivated by these positive results, we propose a design space framework and its associated algorithms to support both global and contextual recommendations. To evaluate the algorithms, we developed the CommunityCommands plug-in for AutoCAD. This plug-in enabled us to perform a 6-week user study of real-time, within-application command recommendations in actual working environments. We report and visualize command usage behaviors during the study, and discuss how the recommendations affected users behaviors. In particular, we found that the plug-in successfully exposed users to new commands, as unique commands issued significantly increased.",
"title": ""
},
{
"docid": "3e6df23444ae08f65ded768c5dc8dc9d",
"text": "In this paper, we propose a method for automatically detecting various types of snore sounds using image classification convolutional neural network (CNN) descriptors extracted from audio file spectrograms. The descriptors, denoted as deep spectrum features, are derived from forwarding spectrograms through very deep task-independent pre-trained CNNs. Specifically, activations of fully connected layers from two common image classification CNNs, AlexNet and VGG19, are used as feature vectors. Moreover, we investigate the impact of differing spectrogram colour maps and two CNN architectures on the performance of the system. Results presented indicate that deep spectrum features extracted from the activations of the second fully connected layer of AlexNet using a viridis colour map are well suited to the task. This feature space, when combined with a support vector classifier, outperforms the more conventional knowledge-based features of 6 373 acoustic functionals used in the INTERSPEECH ComParE 2017 Snoring sub-challenge baseline system. In comparison to the baseline, unweighted average recall is increased from 40.6% to 44.8% on the development partition, and from 58.5% to 67.0% on the test partition.",
"title": ""
}
] |
scidocsrr
|
05678a8d072f167be30e1f85436c0bae
|
Impact Analysis of Start-Up Lost Time at Major Intersections on Sathorn Road Using a Synchro Optimization and a Microscopic SUMO Traffic Simulation
|
[
{
"docid": "661c99429dc6684ca7d6394f01201ac3",
"text": "SUMO is an open source traffic simulation package including net import and demand modeling components. We describe the current state of the package as well as future developments and extensions. SUMO helps to investigate several research topics e.g. route choice and traffic light algorithm or simulating vehicular communication. Therefore the framework is used in different projects to simulate automatic driving or traffic management strategies. Keywordsmicroscopic traffic simulation, software, open",
"title": ""
}
] |
[
{
"docid": "e881c4a576682a73bc9ff6d368cee763",
"text": "Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns.",
"title": ""
},
{
"docid": "426839489ceae1e47c66d8f31214e9bd",
"text": "State-of-the-art methods of people counting in crowded scenes rely on deep networks to estimate people density in the image plane. Perspective distortion effects are handled implicitly by either learning scale-invariant features or estimating density in patches of different sizes, neither of which accounts for the fact that scale changes must be consistent over the whole scene. In this paper, we show that feeding an explicit model of the scale changes to the network considerably increases performance. An added benefit is that it lets us reason in terms of number of people per square meter on the ground, allowing us to enforce physicallyinspired temporal consistency constraints that do not have to be learned. This yields an algorithm that outperforms state-of-the-art methods on crowded scenes, especially when perspective effects are strong.",
"title": ""
},
{
"docid": "99f57f28f8c262d4234d07deb9dcf49d",
"text": "Historically, conversational systems have focused on goal-directed interaction and this focus defined much of the work in the field of spoken dialog systems. More recently researchers have started to focus on nongoal-oriented dialog systems often referred to as ”chat” systems. We can refer to these as Chat-oriented Dialog (CHAD)systems. CHAD systems are not task-oriented and focus on what can be described as social conversation where the goal is to interact while maintaining an appropriate level of engagement with a human interlocutor. Work to date has identified a number of techniques that can be used to implement working CHADs but it has also highlighted important limitations. This note describes CHAD characteristics and proposes a research agenda.",
"title": ""
},
{
"docid": "33431760dfc16c095a4f0b8d4ed94790",
"text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. This review updates the basic immunobiology of the AhR signaling pathway with regards to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis following data in rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.",
"title": ""
},
{
"docid": "86314426c9afd5dbd13d096605af7b05",
"text": "Large scale knowledge graphs (KGs) such as Freebase are generally incomplete. Reasoning over multi-hop (mh) KG paths is thus an important capability that is needed for question answering or other NLP tasks that require knowledge about the world. mh-KG reasoning includes diverse scenarios, e.g., given a head entity and a relation path, predict the tail entity; or given two entities connected by some relation paths, predict the unknown relation between them. We present ROPs, recurrent one-hop predictors, that predict entities at each step of mh-KB paths by using recurrent neural networks and vector representations of entities and relations, with two benefits: (i) modeling mh-paths of arbitrary lengths while updating the entity and relation representations by the training signal at each step; (ii) handling different types of mh-KG reasoning in a unified framework. Our models show state-of-the-art for two important multi-hop KG reasoning tasks: Knowledge Base Completion and Path Query Answering.1",
"title": ""
},
{
"docid": "38b6660a0f246590ad97b75be074899d",
"text": "Technology has been playing a major role in our lives. One definition for technology is “all the knowledge, products, processes, tools, methods and systems employed in the creation of goods or in providing services”. This makes technological innovations raise the competitiveness between organizations that depend on supply chain and logistics in the global market. With increasing competitiveness, new challenges arise due to lack of information and assets tractability. This paper introduces three scenarios for solving these challenges using the Blockchain technology. In this work, Blockchain technology targets two main issues within the supply chain, namely, data transparency and resource sharing. These issues are reflected into the organization's strategies and",
"title": ""
},
{
"docid": "79729b8f7532617015cbbdc15a876a5c",
"text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.",
"title": ""
},
{
"docid": "ea49e4a74c165f3819e24d48df4777f2",
"text": "BACKGROUND\nThe fatty tissue of the face is divided into compartments. The structures delimiting these compartments help shape the face, are involved in aging, and are encountered during surgical procedures.\n\n\nOBJECTIVE\nTo study the border between the lateral-temporal and the middle cheek fat compartments of the face.\n\n\nMETHODS & MATERIALS\nWe studied 40 human cadaver heads with gross dissections and macroscopic and histological sections. Gelatin was injected into the subcutaneous tissues of 35 heads.\n\n\nRESULTS\nA sheet of connective tissue, comparable to a septum, was consistently found between the lateral-temporal and the middle compartments. We call this structure the septum subcutaneum parotideomassetericum.\n\n\nCONCLUSION\nThere is a distinct septum between the lateral-temporal and the middle fat compartments of the face.",
"title": ""
},
{
"docid": "0bb944bccfc46d82cd788d105ac14249",
"text": "Live facial expression recognition is an effective and essential research area in human computer interaction (HCI), and the automatic sign language recognition (ASLR) fields. This paper presents a fully automatic facial expression and direction of sight recognition system, that we called SignsWorld Facial Expression Recognition System (FERS). The SignsWorld FERS is divided into three main components: Face detection that is robust to occlusion, key facial features points extraction and facial expression with direction of sight recognition. We present a powerful multi-detector technique to localize the key facial feature points so that contours of the facial components such as the eyes, nostrils, chin, and mouth are sampled. Based on the extracted 66 facial features points, 20 geometric formulas (GFs), 15 ratios (Rs) are calculated, and the classifier based on rule-based reasoning approach are then formed for both of the gaze direction and the facial expression (Normal, Smiling, Sadness or Surprising). SignsWorld FERS is the person independent facial expression and achieved a recognition rate of 97%.",
"title": ""
},
{
"docid": "dc4d11c0478872f3882946580bb10572",
"text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.",
"title": ""
},
{
"docid": "2f308d1c4b4c900e3024833561792f7e",
"text": "We present the first large-scale survey to investigate how users experience the Bitcoin ecosystem in terms of security, privacy and anonymity. We surveyed 990 Bitcoin users to determine Bitcoin management strategies and identified how users deploy security measures to protect their keys and bitcoins. We found that about 46% of our participants use web-hosted solutions to manage at least some of their bitcoins, and about half of them use exclusively such solutions. We also found that many users do not use all security capabilities of their selected Bitcoin management tool and have significant misconceptions on how to remain anonymous and protect their privacy in the Bitcoin network. Also, 22% of our participants have already lost money due to security breaches or self-induced errors. To get a deeper understanding, we conducted qualitative interviews to explain some of the observed phenomena.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "7c9aba06418b51a90f1f3d97c3e3f83a",
"text": "BACKGROUND\nResearch indicates that music therapy can improve social behaviors and joint attention in children with Autism Spectrum Disorder (ASD); however, more research on the use of music therapy interventions for social skills is needed to determine the impact of group music therapy.\n\n\nOBJECTIVE\nTo examine the effects of a music therapy group intervention on eye gaze, joint attention, and communication in children with ASD.\n\n\nMETHOD\nSeventeen children, ages 6 to 9, with a diagnosis of ASD were randomly assigned to the music therapy group (MTG) or the no-music social skills group (SSG). Children participated in ten 50-minute group sessions over a period of 5 weeks. All group sessions were designed to target social skills. The Social Responsiveness Scale (SRS), the Autism Treatment Evaluation Checklist (ATEC), and video analysis of sessions were used to evaluate changes in social behavior.\n\n\nRESULTS\nThere were significant between-group differences for joint attention with peers and eye gaze towards persons, with participants in the MTG demonstrating greater gains. There were no significant between-group differences for initiation of communication, response to communication, or social withdraw/behaviors. There was a significant interaction between time and group for SRS scores, with improvements for the MTG but not the SSG. Scores on the ATEC did not differ over time between the MTG and SSG.\n\n\nCONCLUSIONS\nThe results of this study support further research on the use of music therapy group interventions for social skills in children with ASD. Statistical results demonstrate initial support for the use of music therapy social groups to develop joint attention.",
"title": ""
},
{
"docid": "68fe4f62d48270395ca3f257bbf8a18a",
"text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.",
"title": ""
},
{
"docid": "dc3417d01a998ee476aeafc0e9d11c74",
"text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).",
"title": ""
},
{
"docid": "83ae128f71bb154177881012dfb6a680",
"text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.",
"title": ""
},
{
"docid": "b622c27ba400e349d2b1ad40c7fc90e1",
"text": "In this work we examine the feasibility of quantitatively characterizing some aspects of security. In particular, we investigate if it is possible to predict the number of vulnerabilities that can potentially be present in a software system but may not have been found yet. We use several major operating systems as representatives of complex software systems. The data on vulnerabilities discovered in these systems are analyzed. We examine the results to determine if the density of vulnerabilities in a program is a useful measure. We also address the question about what fraction of software defects are security related, i.e., are vulnerabilities. We examine the dynamics of vulnerability discovery hypothesizing that it may lead us to an estimate of the magnitude of the undiscovered vulnerabilities still present in the system. We consider the vulnerability discovery rate to see if models can be developed to project future trends. Finally, we use the data for both commercial and opensource systems to determine whether the key observations are generally applicable. Our results indicate that the values of vulnerability densities fall within a range of values, just like the commonly used measure of defect density for general defects. Our examination also reveals that it is possible to model the vulnerability discovery using a logistic model that can sometimes be approximated by a linear model. a 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
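The passage above mentions fitting a logistic model to cumulative vulnerability discovery data in order to estimate how many vulnerabilities remain undiscovered. A generic three-parameter logistic fit with SciPy illustrates the idea; the counts and the exact parameterization are made up for the example and are not the paper's data or model:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, B, A, C):
    """Cumulative vulnerabilities by time t; saturates at B."""
    return B / (1.0 + C * np.exp(-A * B * t))

# Hypothetical cumulative vulnerability counts per quarter.
t = np.arange(1, 21, dtype=float)
y = np.array([2, 4, 7, 12, 19, 28, 39, 51, 63, 74,
              83, 90, 95, 99, 102, 104, 105, 106, 107, 107], float)

(B, A, C), _ = curve_fit(logistic, t, y, p0=[110, 0.005, 100], maxfev=10000)
print(f"estimated total vulnerabilities B ~ {B:.1f}")
print("still undiscovered ~", round(B - y[-1], 1))
```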
{
"docid": "2a40501256bdaa11ab9b4c0c9f04d45b",
"text": "In recent years, deep learning has achieved great success in many computer vision applications. Convolutional neural networks (CNNs) have lately emerged as a major approach to image classification. Most research on CNNs thus far has focused on developing architectures such as the Inception and residual networks. The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself. In this paper, we introduce a convolution unit called the active convolution unit (ACU). A new convolution has no fixed shape, because of which we can define any form of convolution. Its shape can be learned through backpropagation during training. Our proposed unit has a few advantages. First, the ACU is a generalization of convolution, it can define not only all conventional convolutions, but also convolutions with fractional pixel coordinates. We can freely change the shape of the convolution, which provides greater freedom to form CNN structures. Second, the shape of the convolution is learned while training and there is no need to tune it by hand. Third, the ACU can learn better than a conventional unit, where we obtained the improvement simply by changing the conventional convolution to an ACU. We tested our proposed method on plain and residual networks, and the results showed significant improvement using our method on various datasets and architectures in comparison with the baseline. Code is available at https://github.com/jyh2986/Active-Convolution.",
"title": ""
},
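The core mechanism behind the ACU described above is sampling the input at fractional, learnable positions via bilinear interpolation and taking a weighted sum. A minimal forward-pass sketch in NumPy (a toy illustration under simplifying assumptions, not the authors' implementation, which also back-propagates gradients to the offsets):

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample a 2-D array at fractional coordinates (y, x).
    Assumes the sample point lies inside the image."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])

def acu_response(img, cy, cx, offsets, weights):
    """One output value of an 'active' convolution: a weighted sum of
    the input sampled at fractional offsets around (cy, cx). Only the
    forward pass is shown; in training both `weights` and `offsets`
    would receive gradients."""
    return sum(w * bilinear(img, cy + dy, cx + dx)
               for (dy, dx), w in zip(offsets, weights))

img = np.arange(25, dtype=float).reshape(5, 5)
offsets = [(-0.5, -0.5), (-0.5, 0.5), (0.5, -0.5), (0.5, 0.5), (0.0, 0.0)]
weights = [0.2] * 5
print(acu_response(img, 2, 2, offsets, weights))
```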
{
"docid": "23670ac6fb88e2f5d3a31badc6dc38f9",
"text": "The purpose of this review article is to report on the recent developments and the performance level achieved in the strained-Si/SiGe material system. In the first part, the technology of the growth of a high-quality strained-Si layer on a relaxed, linear or step-graded SiGe buffer layer is reviewed. Characterization results of strained-Si films obtained with secondary ion mass spectroscopy, Rutherford backscattering spectroscopy, atomic force microscopy, spectroscopic ellipsometry and Raman spectroscopy are presented. Techniques for the determination of bandgap parameters from electrical characterization of metal–oxide–semiconductor (MOS) structures on strained-Si film are discussed. In the second part, processing issues of strained-Si films in conventional Si technology with low thermal budget are critically reviewed. Thermal and low-temperature microwave plasma oxidation and nitridation of strained-Si layers are discussed. Some recent results on contact metallization of strained-Si using Ti and Pt are presented. In the last part, device applications of strained Si with special emphasis on heterostructure metal oxide semiconductor field effect transistors and modulation-doped field effect transistors are discussed. Design aspects and simulation results of nand p-MOS devices with a strained-Si channel are presented. Possible future applications of strained-Si/SiGe in high-performance SiGe CMOS technology are indicated.",
"title": ""
}
] |
scidocsrr
|
3848a7bd1983002f72f41c79f543c9d9
|
Detection and Classification of Plant Leaf Diseases using ANN
|
[
{
"docid": "a0bf1cb4ba1bc9dee0d5957691906732",
"text": "To identify different plants by leaves digital image is one key problem in precision farming. By the combination of image processing and neural network, Most of the image blocks of different plants could be correctly classified. Firstly, the image enhancement processing can make objects in the source image clear. Secondly, due to the different shapes and sizes of image blocks of leaves, they could be separated and extracted from sources. Then, by using image analysis tools from Matlab, these characters such as sizes, radius, perimeters, solidity, and eccentricity could be calculated. Then, using them as input data, create a radial basis function neural networks. Divide the input data into two parts. Select one part to train the network and the other to check the validity of the model. Finally, input data from other image frames under the same condition could be used to check the model. In this work, the total accuracy is about 80%. These methods was simple and highly effective, So they could easily be integrated into auto machines in the field, which can largely saving labor and enhance productivity.",
"title": ""
},
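The pipeline sketched in the abstract above (segment leaf regions, compute shape descriptors, classify) can be approximated in Python as follows. This is a hedged stand-in: region properties come from scikit-image, and an RBF-kernel SVM from scikit-learn replaces the Matlab radial basis function network; `leaf_masks` and `labels` are hypothetical inputs:

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def leaf_features(binary_mask):
    """Shape descriptors of the largest connected region of a segmented
    leaf mask: area, perimeter, solidity, eccentricity, equivalent radius."""
    regions = regionprops(label(binary_mask))
    r = max(regions, key=lambda p: p.area)
    return [r.area, r.perimeter, r.solidity,
            r.eccentricity, r.equivalent_diameter / 2.0]

# Hypothetical data: leaf_masks is a list of 2-D boolean arrays,
# labels gives the plant species of each image block.
X = np.array([leaf_features(m) for m in leaf_masks])
y = np.array(labels)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # stands in for the RBF network
print("training accuracy:", clf.score(X, y))
```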
{
"docid": "804113bb0459eb04d9b163c086050207",
"text": "The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field, which ultimately leads to crops management. The paper describes a software prototype system for rice disease detection based on the infected images of various rice plants. Images of the infected rice plants are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the plants. Then the infected part of the leaf has been used for the classification purpose using neural network. The methods evolved in this system are both image processing and soft computing technique applied on number of diseased rice plants.",
"title": ""
},
{
"docid": "213c393635b8a7bb341fd1cc05e23d2d",
"text": "Vegetables and fruits are the most important export agricultural products of Thailand. In order to obtain more value-added products, a product quality control is essentially required. Many studies show that quality of agricultural products may be reduced from many causes. One of the most important factors of such quality is plant diseases. Consequently, minimizing plant diseases allows substantially improving quality of the products. This work presents automatic plant disease diagnosis using multiple artificial intelligent techniques. The system can diagnose plant leaf disease without maintaining any expertise once the system is trained. Mainly, the grape leaf disease is focused in this work. The proposed system consists of three main parts: (i) grape leaf color segmentation, (ii) grape leaf disease segmentation, and (iii) analysis & classification of diseases. The grape leaf color segmentation is pre-processing module which segments out any irrelevant background information. A self-organizing feature map together with a back-propagation neural network is deployed to recognize colors of grape leaf. This information is used to segment grape leaf pixels within the image. Then the grape leaf disease segmentation is performed using modified self-organizing feature map with genetic algorithms for optimization and support vector machines for classification. Finally, the resulting segmented image is filtered by Gabor wavelet which allows the system to analyze leaf disease color features more efficient. The support vector machines are then again applied to classify types of grape leaf diseases. The system can be able to categorize the image of grape leaf into three classes: scab disease, rust disease and no disease. The proposed system shows desirable results which can be further developed for any agricultural product analysis/inspection system.",
"title": ""
}
] |
[
{
"docid": "2a5710aeaba7e39c5e08c1a5310c89f6",
"text": "We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.",
"title": ""
},
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "a2238524731bf855a1edb9ad874740a6",
"text": "Lack of trust has been identified as a major obstacle to the adoption of online shopping. However, there is paucity of research that investigates the effectiveness of various trust building mechanisms, especially the interactions amongst these mechanisms. In this study, three trust building mechanisms (i.e., third-party certification, reputation, and return policy) were examined. Scenario survey method was used for data collection. 463 usable questionnaires were collected from respondents with diverse backgrounds. Regression results show that all three trust building mechanisms have significant positive effects on trust in the online vendor. Their effects are not simple ones; the different trust building mechanisms interact with one another to produce an overall effect on the level of trust. These results have both theoretical and practical implications.",
"title": ""
},
{
"docid": "a0e14f5c359de4aa8e7640cf4ff5effa",
"text": "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. We suggest two new methods, the local averaging approximation and the monotone alignments.",
"title": ""
},
{
"docid": "718e31eabfd386768353f9b75d9714eb",
"text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.",
"title": ""
},
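The hardness scale described above is a direct function of the escape rate once kappa has been estimated from the dynamics. A tiny helper makes the mapping and the bands explicit; the kappa values in the loop are purely illustrative:

```python
import math

def hardness(escape_rate):
    """Map the transient-chaos escape rate kappa to the eta scale."""
    eta = -math.log10(escape_rate)
    if eta <= 1:
        band = "easy"
    elif eta <= 2:
        band = "medium"
    elif eta <= 3:
        band = "hard"
    else:
        band = "ultra-hard"
    return eta, band

for kappa in (0.5, 0.05, 0.005, 0.0005):   # illustrative values only
    print(kappa, "->", hardness(kappa))
```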
{
"docid": "fd9e411292f1c1fff998df9f1887b8e0",
"text": "In this paper we present PerKApp, a context-aware system for inducing the user to adopt healthier lifestyles, based on a novel combination of persuasion technologies, natural language generation techniques, and deep knowledge representation tools. In our view, personalized and tailored messages generated according to the characteristic of user, user preferences and the context are extremely useful to increase the effectiveness of persuasion efforts in terms of user acceptance of the proposed behaviors. The architecture of PerKApp is designed with the goal of ease scalability and extendibility to other domains by redefinition of the knowledge and linguistic content.",
"title": ""
},
{
"docid": "53a7aff5f5409e3c2187a5d561ff342e",
"text": "We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing behavior data obtained from 1365 players that completed the TRU game. The unsupervised learning approach utilized reveals four types of players which are analyzed within the context of the game. The proposed approach automates, in part, the traditional user and play testing procedures followed in the game industry since it can inform game developers, in detail, if the players play the game as intended by the game design. Subsequently, player models can assist the tailoring of game mechanics in real-time for the needs of the player type identified.",
"title": ""
},
{
"docid": "b7177265a8e82e4357fdb8eeb3cbab12",
"text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by a cosine layer. Each sub network includes two convolutional layers and a full connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method and a cross database experiment also shows its good generalization.",
"title": ""
},
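A minimal PyTorch sketch of the architecture described above (two shared-weight branches of two conv layers plus a fully connected layer, a cosine layer between them, and a binomial deviance loss) is given below. Layer sizes, input resolution, and the exact deviance constants are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One sub-network: two conv layers + a fully connected layer."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 5, stride=2, padding=2)
        self.conv2 = nn.Conv2d(32, 64, 5, stride=2, padding=2)
        self.fc = nn.Linear(64 * 32 * 16, embed_dim)   # assumes 128x64 inputs

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.fc(x.flatten(1))

def binomial_deviance(sim, same, alpha=2.0, beta=0.5):
    """Binomial deviance on cosine similarities; `same` is +1 for matched
    pairs and -1 for mismatched pairs (one common form; the paper's exact
    constants may differ)."""
    return torch.log1p(torch.exp(-alpha * (sim - beta) * same)).mean()

branch = Branch()                                      # weights shared across the two inputs
x1, x2 = torch.randn(8, 3, 128, 64), torch.randn(8, 3, 128, 64)
same = torch.tensor([1, -1] * 4, dtype=torch.float32)
sim = F.cosine_similarity(branch(x1), branch(x2))      # the "cosine layer"
loss = binomial_deviance(sim, same)
loss.backward()
print(float(loss))
```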
{
"docid": "b77bef86667caed885fee95c79dc2292",
"text": "In this work, we propose a novel method for vocabulary selection to automatically adapt automatic speech recognition systems to the diverse topics that occur in educational and scientific lectures. Utilizing materials that are available before the lecture begins, such as lecture slides, our proposed framework iteratively searches for related documents on the web and generates a lecture-specific vocabulary based on the resulting documents. In this paper, we propose a novel method for vocabulary selection where we first collect documents similar to an initial seed document and then rank the resulting vocabulary based on a score which is calculated using a combination of word features. This is a critical component for adaptation that has typically been overlooked in prior works. On the inter ACT German-English simultaneous lecture translation system our proposed approach significantly improved vocabulary coverage, reducing the out-of-vocabulary rate, on average by 57.0% and up to 84.9%, compared to a lecture-independent baseline. Furthermore, our approach reduced the word error rate, by 12.5% on average and up to 25.3%, compared to a lecture-independent baseline.",
"title": ""
},
{
"docid": "b254f1e5bbafa8c824842f78b594490b",
"text": "In a previous examination of feedback research (Mory, 1996), the use of feedback in the facilitation of learning was examined extensively according to various historical and paradigmatic views of the past feedback literature. Most of the research presented in that volume in the area of feedback was completed with specific assumptions as to what purpose feedback serves. This still holds true, and even more so, because our theories and paradigms have expanded, and the field of instructional design has undergone and will continue to undergo rapid changes in technologies that will afford new advances to take place in both the delivery and the context of using feedback in instruction. It is not surprising that feedback may have various functions according to the particular learning environment in which it is examined and the particular learning paradigm under which it is viewed. In fact, feedback is incorporated in many paradigms of learning, from the early views of behaviorism (Skinner, 1958), to cognitivism (Gagné, 1985; Kulhavy & Wager 1993) through more recent models of constructivism (Jonassen, 1991, 1999; Mayer, 1999; Willis, 2000), settings such as open learning environments (Hannafin, Land, & Oliver, 1999), and views that support multiple approaches to understanding (Gardner, 1999), to name just a few. While feedback has been an essential element of theories of learning and instruction in the past (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991), it still pervades the literature and instructional models as an important aspect of instruction (Collis, De Boer, & Slotman, 2001; Dick, Carey, & Carey, 2001).",
"title": ""
},
{
"docid": "06d30f5d22689e07190961ae76f7b9a0",
"text": "In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent delivers a high rate of application data to each child. In Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.",
"title": ""
},
{
"docid": "76773a08016cf73a5eb062a84b2a577e",
"text": "This grounded theory study on developing a leadership identity revealed a 6-stage developmental process. The thirteen diverse students in this study described their leadership identity as moving from a leader-centric view to one that embraced leadership as a collaborative, relational process. Developing a leadership identity was connected to the categories of developmental influences, developing self, group influences, students’ changing view of self with others, and students’ broadening view of leadership. A conceptual model illustrating the grounded theory of developing a leadership identity is presented.",
"title": ""
},
{
"docid": "bd37aa47cf495c7ea327caf2247d28e4",
"text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.",
"title": ""
},
{
"docid": "2a81d56c89436b3379c7dec082d19b17",
"text": "We present a fast, efficient, and automatic method for extracting vessels from retinal images. The proposed method is based on the second local entropy and on the gray-level co-occurrence matrix (GLCM). The algorithm is designed to have flexibility in the definition of the blood vessel contours. Using information from the GLCM, a statistic feature is calculated to act as a threshold value. The performance of the proposed approach was evaluated in terms of its sensitivity, specificity, and accuracy. The results obtained for these metrics were 0.9648, 0.9480, and 0.9759, respectively. These results show the high performance and accuracy that the proposed method offers. Another aspect evaluated in this method is the elapsed time to carry out the segmentation. The average time required by the proposed method is 3 s for images of size 565 9 584 pixels. To assess the ability and speed of the proposed method, the experimental results are compared with those obtained using other existing methods.",
"title": ""
},
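The abstract above derives a segmentation threshold from a GLCM-based entropy statistic. The sketch below only shows how such a co-occurrence matrix and an entropy-weighted threshold might be computed with scikit-image; the blend formula is illustrative and is not the authors' exact second-entropy statistic:

```python
import numpy as np
from skimage.feature import graycomatrix   # named `greycomatrix` in older scikit-image

def glcm_entropy_threshold(gray_img):
    """Illustrative global threshold from GLCM entropy.
    gray_img: 2-D uint8 array (e.g. the green channel of a fundus image)."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                 # average over distance/angle
    p = p / p.sum()
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))        # GLCM entropy in bits
    # Illustrative blend: mean intensity plus an entropy-scaled offset.
    return gray_img.mean() + entropy / 256.0 * gray_img.std()

# vessels = gray_img < glcm_entropy_threshold(gray_img)   # dark vessels on a bright fundus
```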
{
"docid": "ac8eb93297186b67e3bc03c687f55d4c",
"text": "This paper presents a scale and view invariant approach for human detection in the presence of various other objects like animals, vehicles, etc. Human detection is one of the essential steps in applications like activity recognition, gait recognition, human centric surveillance etc. Inaccurate detection of humans in such applications may increase the number of false alarms. In the proposed work, fuzzy logic has been used to model a robust background for object detection. Three different features are extracted from the contours of the detected objects. These features are aggregated using fuzzy inference system. Then human contour is identified using template matching. The proposed method consists of four main steps; Moving Object Detection, Feature Extraction, Feature Aggregation, and Human Contour Detection.",
"title": ""
},
{
"docid": "45a98a82d462d8b12445cbe38f20849d",
"text": "Proliferative verrucous leukoplakia (PVL) is an aggressive form of oral leukoplakia that is persistent, often multifocal, and refractory to treatment with a high risk of recurrence and malignant transformation. This article describes the clinical aspects and histologic features of a case that demonstrated the typical behavior pattern in a long-standing, persistent lesion of PVL of the mandibular gingiva and that ultimately developed into squamous cell carcinoma. Prognosis is poor for this seemingly harmless-appearing white lesion of the oral mucosa.",
"title": ""
},
{
"docid": "59678b6abdc3264bad930cd31f1a0481",
"text": "Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers generalization issues under the presence of a domain shift between the training and the test data distribution. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address the domain shift problem. In this paper, we approach the problem from a transductive perspective. We incorporate the domain shift and the transductive target inference into our framework by jointly solving for an asymmetric similarity metric and the optimal transductive target label assignment. We also show that our model can easily be extended for deep feature learning in order to learn features which are discriminative in the target domain. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin.",
"title": ""
},
{
"docid": "2ae6b3cd88594351646d9c88c9931842",
"text": "Inpainting is the process of reconstructing lost or deteriorated part of images based on the background information. i. e. image Inpainting fills the missing or damaged region in an image utilizing spatial information of its neighbouring region. Inpainting algorithm have numerous applications. It is helpfully used for restoration of old films and object removal in digital photographs. It is also applied to red-eye correction, super resolution, compression etc. The main goal of the Inpainting algorithm is to modify the damaged region in an image in such a way that the inpainted region is undetectable to the ordinary observers who are not familiar with the original image. There have been several approaches proposed for the image inpainting. This proposed work presents a brief survey of different image inpainting techniques and comparative study of these techniques. In this paper we provide a review of different techniques used for image Inpainting. We discuss different inpainting techniques like image PDE based image inpainting, Exemplar based image inpainting, hybrid inpainting, and texture synthesis based image inpainting and semi-automatic and fast digital Inpainting.",
"title": ""
},
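Of the families surveyed above, the diffusion/PDE-style methods are the easiest to try, since OpenCV ships two classical implementations (Telea and Navier–Stokes). A minimal usage sketch, with hypothetical file names and a mask that is white where pixels are damaged:

```python
import cv2

# Hypothetical inputs: mask is 255 where pixels are missing or damaged.
img = cv2.imread("damaged_photo.png")
mask = cv2.imread("damage_mask.png", cv2.IMREAD_GRAYSCALE)

# Two classical diffusion/PDE-style inpainters available in OpenCV.
restored_telea = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)   # radius = 3 px
restored_ns = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)

cv2.imwrite("restored_telea.png", restored_telea)
cv2.imwrite("restored_ns.png", restored_ns)
```

Exemplar-based methods, by contrast, copy whole patches from the undamaged region and tend to work better for large holes and textured areas.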
{
"docid": "d525021f4d39bae4d24e290f14e4dca7",
"text": "Aging face recognition refers to matching the same person's faces across different ages, e.g., matching a person's older face to his (or her) younger one, which has many important practical applications, such as finding missing children. The major challenge of this task is that facial appearance is subject to significant change during the aging process. In this paper, we propose to solve the problem with a hierarchical model based on two-level learning. At the first level, effective features are learned from low-level microstructures, based on our new feature descriptor called local pattern selection (LPS). The proposed LPS descriptor greedily selects low-level discriminant patterns in a way, such that intra-user dissimilarity is minimized. At the second level, higher level visual information is further refined based on the output from the first level. To evaluate the performance of our new method, we conduct extensive experiments on the MORPH data set (the largest face aging data set available in the public domain), which show a significant improvement in accuracy over the state-of-the-art methods.",
"title": ""
},
{
"docid": "06bb270af257fc847b7b4147daab49ec",
"text": "Traffic congestion is a major concern for many cities throughout the world. Developing a sophisticated traffic monitoring and control system would result in an effective solution to this problem. In a conventional traffic light controller, the traffic lights change at constant cycle time. Hence it does not provide an optimal solution. Many traffic light controllers implemented in current practice, are based on the 'time-of-the-day' scheme, which use a limited number of predetermined traffic light patterns and implement these patterns depending upon the time of the day. These automated systems do not provide an optimal control for fluctuating traffic volumes. A traffic light controller based on fuzzy logic can be used for optimum control of fluctuating traffic volumes such as over saturated or unusual load conditions. The objective is to improve the vehicular throughput and minimize delays. The rules of fuzzy logic controller are formulated by following the same protocols that a human operator would use to control the time intervals of the traffic light. The length of the current green phase is extended or terminated depending upon the 'arrival' i.e. the number of vehicles approaching the green phase and the 'queue' that corresponds to the number of queuing vehicles in red phases. A prototype system for controlling traffic at an intersection is designed using VB6 and Matlab tool. The traffic intersection is simulated in VB6 and the data regarding the traffic parameters is collected in VB6 environment. The decision on the duration of the extension is taken using the Matlab tool. This decision is based on the Arrival and Queue of vehicles, which is imported in Matlab from VB6 environment. The time delay experienced by the vehicles using the fixed as well as fuzzy traffic controller is then compared to observe the effectiveness of the fuzzy traffic controller.",
"title": ""
}
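The green-extension decision described above can be sketched as a small Mamdani-style fuzzy controller over the two inputs, Arrival and Queue. The membership breakpoints and the two rules below are invented for illustration and are not the paper's rule base:

```python
import numpy as np

def ramp_up(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def ramp_down(x, a, b):
    """1 below a, 0 above b, linear in between."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def green_extension(arrival, queue):
    """Fuzzy decision on how many seconds to extend the current green.
    Membership breakpoints and the two rules are illustrative only."""
    arr_few, arr_many = ramp_down(arrival, 2, 10), ramp_up(arrival, 2, 10)
    q_short, q_long = ramp_down(queue, 2, 10), ramp_up(queue, 2, 10)

    ext = np.linspace(0.0, 20.0, 201)          # candidate extensions (s)
    ext_short = ramp_down(ext, 2, 10)
    ext_long = ramp_up(ext, 8, 16)

    # Rule 1: many arrivals AND short cross-queue -> long extension.
    # Rule 2: few arrivals OR long cross-queue    -> short extension.
    r_long = min(arr_many, q_short)
    r_short = max(arr_few, q_long)
    agg = np.maximum(np.minimum(ext_long, r_long),
                     np.minimum(ext_short, r_short))
    return float((ext * agg).sum() / (agg.sum() + 1e-9))    # centroid defuzzification

print(green_extension(arrival=12, queue=1))    # heavy green-phase traffic -> long extension
print(green_extension(arrival=1, queue=12))    # long red-phase queue      -> short extension
```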
] |
scidocsrr
|
a2c03a19b1e12da7fca66855a2266e6f
|
SQenloT: Semantic query engine for industrial Internet-of-Things gateways
|
[
{
"docid": "fdac9bbe4e92fedfcd237878afdefc90",
"text": "Pervasive and sensor-driven systems are by nature open and extensible, both in terms of input and tasks they are required to perform. Data streams coming from sensors are inherently noisy, imprecise and inaccurate, with di↵ering sampling rates and complex correlations with each other. These characteristics pose a significant challenge for traditional approaches to storing, representing, exchanging, manipulating and programming with sensor data. Semantic Web technologies provide a uniform framework for capturing these properties. O↵ering powerful representation facilities and reasoning techniques, these technologies are rapidly gaining attention towards facing a range of issues such as data and knowledge modelling, querying, reasoning, service discovery, privacy and provenance. This article reviews the application of the Semantic Web to pervasive and sensor-driven systems with a focus on information modelling and reasoning along with streaming data and uncertainty handling. The strengths and weaknesses of current and projected approaches are analysed and a roadmap is derived for using the Semantic Web as a platform, on which open, standard-based, pervasive, adaptive and sensor-driven systems can be deployed.",
"title": ""
}
] |
[
{
"docid": "60a4d92be550fb5f729359f472420c29",
"text": "A simple and effective technique for designing integrated planar Marchand balun is presented in this paper. The approach uses the physical transformer model to replace the lossy coupled transmission lines in a conventional Marchand balun design. As a demonstration and validation of the design approach, a Marchand balun using silicon-based integrated passive device (IPD) technology is carried out at a center frequency of 2.45 GHz. The measured results show low insertion loss and high balance property over a wide bandwidth for the implemented Marchand balun. Comparison among modeled, EM simulated and measured results shows good agreement.",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "8b1fa33cc90434abddf5458e05db0293",
"text": "The Stand-Alone Modula-2 System (SAM2S) is a portable, concurrent operating system and Modula-2 programming support environment. It is based on a highly modular kernel task running on single process-multiplexed microcomputers. SAM2S offers extensive network communication facilities. It provides the foundation for the locally resident portions of the MICROS distributed operating system for large netcomputers. SAM2S now supports a five-pass Modula-2 compiler, a task linker, link and load file decoders, a static symbolic debugger, a filer, and other utility tasks. SAM2S is currently running on each node of a network of DEC LSI-11/23 and Heurikon/Motorola 68000 workstations connected by an Ethernet. This paper reviews features of Modula-2 for operating system development and outlines the design of SAM2S with special emphasis on its modularity and communication flexibility. The two SAM2S implementations differ mainly in their peripheral drivers and in the large amount of memory available on the 68000 systems. Modula-2 has proved highly suitable for writing large, portable, concurrent and distributed operating systems.",
"title": ""
},
{
"docid": "13584c61e4caecf3828f2a11037f492e",
"text": "Privacy in social networks is a large and growing concern in recent times. It refers to various issues in a social network which include privacy of users, links, and their attributes. Each privacy component of a social network is vast and consists of various sub-problems. For example, user privacy includes multiple sub-problems like user location privacy, and user personal information privacy. This survey on privacy in social networks is intended to serve as an initial introduction and starting step to all further researchers. We present various privacy preserving models and methods include naive anonymization, perturbation, or building a complete alternative network. We show the work done by multiple researchers in the past, where social networks are stated as network graphs with users represented as nodes and friendship between users represented as links between the nodes. We study ways and mechanisms developed to protect these nodes and links in the network. We also review other systems proposed, along with all the available databases for future researchers in this area.",
"title": ""
},
{
"docid": "fcab229efac66654e418e4e23f49c099",
"text": "An adaptive and fast constant false alarm rate (CFAR) algorithm based on automatic censoring (AC) is proposed for target detection in high-resolution synthetic aperture radar (SAR) images. First, an adaptive global threshold is selected to obtain an index matrix which labels whether each pixel of the image is a potential target pixel or not. Second, by using the index matrix, the clutter environment can be determined adaptively to prescreen the clutter pixels in the sliding window used for detecting. The G 0 distribution, which can model multilook SAR images within an extensive range of degree of homogeneity, is adopted as the statistical model of clutter in this paper. With the introduction of AC, the proposed algorithm gains good CFAR detection performance for homogeneous regions, clutter edge, and multitarget situations. Meanwhile, the corresponding fast algorithm greatly reduces the computational load. Finally, target clustering is implemented to obtain more accurate target regions. According to the theoretical performance analysis and the experiment results of typical real SAR images, the proposed algorithm is shown to be of good performance and strong practicability.",
"title": ""
},
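The detector above adds automatic censoring and a G0 clutter model on top of the standard CFAR sliding-window mechanics. For reference, a basic cell-averaging CFAR on a 1-D power profile shows only those mechanics (guard cells, training cells, and a scaling factor set from the desired false-alarm probability), and is a simplified stand-in rather than the paper's algorithm:

```python
import numpy as np

def ca_cfar(x, num_train=16, num_guard=4, pfa=1e-4):
    """Basic cell-averaging CFAR on a 1-D power profile."""
    n = len(x)
    half = num_train // 2 + num_guard
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)   # CA-CFAR scaling
    hits = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        left = x[i - half: i - num_guard]                    # training cells, left side
        right = x[i + num_guard + 1: i + half + 1]           # training cells, right side
        noise = np.concatenate([left, right]).mean()
        hits[i] = x[i] > alpha * noise
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 500)       # homogeneous clutter (power)
profile[250] += 40.0                      # one strong target
print(np.flatnonzero(ca_cfar(profile)))   # should report index 250
```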
{
"docid": "664a759c81c6f2fbaa2941acfe1c34e4",
"text": "Convolutional highways are deep networks based on multiple stacked convolutional layers for feature preprocessing. We introduce an evolutionary algorithm (EA) for optimization of the structure and hyperparameters of convolutional highways and demonstrate the potential of this optimization setting on the well-known MNIST data set. The (1+1)-EA employs Rechenberg’s mutation rate control and a niching mechanism to overcome local optima adapts the optimization approach. An experimental study shows that the EA is capable of improving the state-of-the-art network contribution and of evolving highway networks from scratch.",
"title": ""
},
{
"docid": "7c0748301936c39166b9f91ba72d92ef",
"text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 
4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). 
handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). 4.10 Verification of class Files THE CLASS FILE FORMAT 208 Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). • Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ",
"title": ""
},
{
"docid": "bddd2a1bec31d75892bce94f2b6b6387",
"text": "We present a real-time system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. We introduce a novel triangular surface patch (TSP) descriptor, which encodes the shape of the 3D surface of the face within a triangular area. The proposed descriptor is viewpoint invariant, and it is robust to noise and to variations in the data resolution. Using a fast nearest neighbor lookup, TSP descriptors from an input depth map are matched to the most similar ones that were computed from synthetic head models in a training phase. The matched triangular surface patches in the training set are used to compute estimates of the 3D head pose and facial landmark positions in the input depth map. By sampling many TSP descriptors, many votes for pose and landmark positions are generated which together yield robust final estimates. We evaluate our approach on the publicly available Biwi Kinect Head Pose Database to compare it against state-of-the-art methods. Our results show a significant improvement in the accuracy of both pose and landmark location estimates while maintaining real-time speed.",
"title": ""
},
{
"docid": "7dba7b28582845bf13d9f9373e39a2af",
"text": "The Internet and social media provide a major source of information about people's opinions. Due to the rapidly growing number of online documents, it becomes both time-consuming and hard task to obtain and analyze the desired opinionated information. Sentiment analysis is the classification of sentiments expressed in documents. To improve classification perfromance feature selection methods which help to identify the most valuable features are generally applied. In this paper, we compare the performance of four feature selection methods namely Chi-square, Information Gain, Query Expansion Ranking, and Ant Colony Optimization using Maximum Entropi Modeling classification algorithm over Turkish Twitter dataset. Therefore, the effects of feature selection methods over the performance of sentiment analysis of Turkish Twitter data are evaluated. Experimental results show that Query Expansion Ranking and Ant Colony Optimization methods outperform other traditional feature selection methods for sentiment analysis.",
"title": ""
},
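Of the four selection methods compared above, the chi-square variant is straightforward to reproduce with scikit-learn on a bag-of-words representation. A minimal example with a toy English corpus standing in for the Turkish tweets (information gain could similarly be approximated with `mutual_info_classif`):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

tweets = ["great phone, love the battery",
          "terrible service, very slow",
          "love this, works great",
          "slow and terrible, do not buy"]        # toy stand-in for the Turkish data
labels = [1, 0, 1, 0]                              # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(tweets)
vocab = vec.get_feature_names_out()

selector = SelectKBest(chi2, k=4).fit(X, labels)   # keep the 4 highest chi-square terms
print("chi2 picks:", vocab[selector.get_support()])
X_reduced = selector.transform(X)                  # feed this to the downstream classifier
```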
{
"docid": "dde2211bd3e9cceb20cce63d670ebc4c",
"text": "This paper presents the design of a 60 GHz phase shifter integrated with a low-noise amplifier (LNA) and power amplifier (PA) in a 65 nm CMOS technology for phased array systems. The 4-bit digitally controlled RF phase shifter is based on programmable weighted combinations of I/Q paths using digitally controlled variable gain amplifiers (VGAs). With the combination of an LNA, a phase shifter and part of a combiner, each receiver path achieves 7.2 dB noise figure, a 360° phase shift range in steps of approximately 22.5°, an average insertion gain of 12 dB at 61 GHz, a 3 dB-bandwidth of 5.5 GHz and dissipates 78 mW. Consisting of a phase shifter and a PA, one transmitter path achieves a maximum output power of higher than +8.3 dBm, a 360° phase shift range in 22.5° steps, an average insertion gain of 7.7 dB at 62 GHz, a 3 dB-bandwidth of 6.5 GHz and dissipates 168 mW.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "db907780a2022761d2595a8ad5d03401",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "d91afc5fdd46796808016323fb7b9a29",
"text": "The objective of this study is presenting the causal modeling of intention to use technology among university student. Correlation is used as the method of research. Instrument of this study is standard questionnaire. The collected data is analyzed with AMOS software. The result indicate that facilitative condition, cognitive absorption, perceived enjoyment, perceived ease of use, and perceived usefulness have significant and direct effect on intention to use technology. Also, facilitative condition, cognitive absorption, perceived enjoyment, perceived ease of use and computer playfulness have significant and direct of effect on perceived usefulness. Facilitative condition, cognitive absorption, perceived enjoyment, and playfulness have significant and direct effect on perceived ease of use. [Hossien Zare,Sedigheh Yazdanparast. The causal Model of effective factors on Intention to use of information technology among payam noor and Traditional universities students. Life Sci J 2013;10(2):46-50]. (ISSN:1097-8135). http:www.lifesciencesite.com. 8",
"title": ""
},
{
"docid": "1e8caa9f0a189bafebd65df092f918bc",
"text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.",
"title": ""
},
{
"docid": "dcef528dbd89bc2c26820bdbe52c3d8d",
"text": "The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic anld industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.",
"title": ""
},
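The vector-space and orthogonal-factorization ideas above are the basis of latent semantic indexing. A minimal NumPy sketch (toy term-document counts; the vocabulary, documents, and rank are made up for illustration):

```python
import numpy as np

terms = ["matrix", "vector", "query", "retrieval", "protein", "folding"]
# Term-document matrix: rows = terms, columns = documents (toy counts).
A = np.array([[2, 0, 1, 0],
              [1, 0, 2, 0],
              [0, 1, 1, 0],
              [0, 2, 1, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 2]], dtype=float)

k = 2                                          # reduced rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T      # documents live in rows of Vk * sk

q = np.zeros(len(terms))
q[[2, 3]] = 1.0                                # query: "query retrieval"
q_k = q @ Uk / sk                              # fold the query into the latent space

docs_k = Vk * sk
scores = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k) + 1e-12)
print("document ranking:", np.argsort(-scores))   # cosine ranking in the reduced space
```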
{
"docid": "0eb75b719f523ca4e9be7fca04892249",
"text": "In this study 2,684 people evaluated the credibility of two live Web sites on a similar topic (such as health sites). We gathered the comments people wrote about each siteís credibility and analyzed the comments to find out what features of a Web site get noticed when people evaluate credibility. We found that the ìdesign lookî of the site was mentioned most frequently, being present in 46.1% of the comments. Next most common were comments about information structure and information focus. In this paper we share sample participant comments in the top 18 areas that people noticed when evaluating Web site credibility. We discuss reasons for the prominence of design look, point out how future studies can build on what we have learned in this new line of research, and outline six design implications for human-computer interaction professionals.",
"title": ""
},
{
"docid": "2bfe219ce52a44299178513d88721353",
"text": "This paper describes a spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex. The model simulates the behavior of the HVS with a three-dimensional lter bank which decomposes the data into perceptual channels, each one being tuned to a speciic spatial frequency, orientation and temporal frequency. It further accounts for contrast sensitivity, inter-stimuli masking and spatio-temporal interaction. The free parameters of the model have been estimated by psychophysics. The model can then be used as the basis for many applications. As an example, a quality metric for coded video sequences is presented.",
"title": ""
},
{
"docid": "b94429b8f1a8bf06a4efe8305ecf430d",
"text": "Schizophrenia is a complex psychiatric disorder with a characteristic disease course and heterogeneous etiology. While substance use disorders and a family history of psychosis have individually been identified as risk factors for schizophrenia, it is less well understood if and how these factors are related. To address this deficiency, we examined the relationship between substance use disorders and family history of psychosis in a sample of 1219 unrelated patients with schizophrenia. The lifetime rate of substance use disorders in this sample was 50%, and 30% had a family history of psychosis. Latent class mixture modeling identified three distinct patient subgroups: (1) individuals with low probability of substance use disorders; (2) patients with drug and alcohol abuse, but no symptoms of dependence; and (3) patients with substance dependence. Substance use was related to being male, to a more severe disease course, and more acute symptoms at assessment, but not to an earlier age of onset of schizophrenia or a specific pattern of positive and negative symptoms. Furthermore, substance use in schizophrenia was not related to a family history of psychosis. The results suggest that substance use in schizophrenia is an independent risk factor for disease severity and onset.",
"title": ""
},
{
"docid": "57974e76bf29edb7c2ae54462aab839f",
"text": "UWB is a very attractive technology for many applications. It provides many advantages such as fine resolution and high power efficiency. Our interest in the current study is the use of UWB radar technique in microwave medical imaging systems, especially for early breast cancer detection. The Federal Communications Commission FCC allowed frequency bandwidth of 3.1 to 10.6 GHz for this purpose. In this paper we suggest an UWB Bowtie slot antenna with enhanced bandwidth. Effects of varying the geometry of the antenna on its performance and bandwidth are studied. The proposed antenna is simulated in CST Microwave Studio. Details of antenna design and simulation results such as return loss and radiation patterns are discussed in this paper. The final antenna structure exhibits good UWB characteristics and has surpassed the bandwidth requirements. Keywords—Ultra Wide Band (UWB), microwave imaging system, Bowtie antenna, return loss, impedance bandwidth enhancement.",
"title": ""
},
{
"docid": "ccfa5c06643cb3913b0813103a85e0b0",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
}
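A small numpy sketch of the graph-convolution propagation the passage above relies on: category word embeddings are propagated over a normalized category graph, and each output row is read as that category's predicted visual classifier. The graph, embeddings, and weights here are random stand-ins, not the paper's trained model.

```python
# Two-layer GCN propagation producing per-category classifier weights (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
n_cat, emb_dim, hid_dim, clf_dim = 6, 300, 128, 512

A = (rng.random((n_cat, n_cat)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                         # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt              # symmetric normalisation

X = rng.normal(size=(n_cat, emb_dim))            # word embeddings of the categories
W1 = rng.normal(scale=0.1, size=(emb_dim, hid_dim))
W2 = rng.normal(scale=0.1, size=(hid_dim, clf_dim))

H = np.maximum(A_hat @ X @ W1, 0.0)              # graph convolution + ReLU
W_pred = A_hat @ H @ W2                          # one predicted classifier per category (row)

# Training would regress W_pred[seen] onto classifiers learned from images of seen
# categories; rows for unseen categories are then used directly at test time.
print(W_pred.shape)
```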
] |
scidocsrr
|
604cfd64dff860a0d6973518ea8be517
|
Neural Headline Generation on Abstract Meaning Representation
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
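A toy forward pass of a feed-forward neural language model in the spirit of the passage above: each word has a learned embedding, a window of context embeddings is concatenated and passed through a hidden layer, and a softmax gives the next-word distribution. Sizes, parameters, and word ids are illustrative, not from the paper.

```python
# Feed-forward neural probabilistic language model, forward pass only (toy sizes).
import numpy as np

rng = np.random.default_rng(1)
V, d, n_ctx, h = 1000, 64, 3, 128      # vocab size, embedding dim, context length, hidden units

C = rng.normal(scale=0.01, size=(V, d))          # shared word embeddings
H = rng.normal(scale=0.01, size=(n_ctx * d, h))
U = rng.normal(scale=0.01, size=(h, V))
b, c = np.zeros(h), np.zeros(V)

def next_word_probs(context_ids):
    x = C[context_ids].reshape(-1)               # concatenate the context embeddings
    a = np.tanh(x @ H + b)
    logits = a @ U + c
    e = np.exp(logits - logits.max())
    return e / e.sum()

p = next_word_probs([12, 7, 421])
print(p.shape, p.sum())                          # (1000,) 1.0
# Training would maximise log p(w_t | context) over a corpus, updating C, H, U, b, c.
```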
{
"docid": "00a0ab98af151a80fe7b51d6277cb996",
"text": "Meaning Representation for Sembanking",
"title": ""
},
{
"docid": "46b6b08d160a95e42c187f756a6c3977",
"text": "We have created layers of annotation on the English Gigaword v.5 corpus to render it useful as a standardized corpus for knowledge extraction and distributional semantics. Most existing large-scale work is based on inconsistent corpora which often have needed to be re-annotated by research teams independently, each time introducing biases that manifest as results that are only comparable at a high level. We provide to the community a public reference set based on current state-of-the-art syntactic analysis and coreference resolution, along with an interface for programmatic access. Our goal is to enable broader involvement in large-scale knowledge-acquisition efforts by researchers that otherwise may not have had the ability to produce such a resource on their own.",
"title": ""
}
] |
[
{
"docid": "663068bb3ff4d57e1609b2a337a34d7f",
"text": "Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for the OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to achieve the edge map of OD. Thresholding is performed on the edge map, thus a binary image of the OD is obtained. Finally, circle Hough transform is carried out to approximate the boundary of OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dices coefficients of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and a true positive and false positive fraction of 0.9183 and 0.0102) show that the proposed method is very competitive with the state-of-the-art methods and is a reliable tool for the segmentation of OD.",
"title": ""
},
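A rough sketch of the last two stages described above (thresholding an edge map, then a circular Hough transform to approximate the optic-disc boundary). A Canny edge map stands in for the learned structured-forest edge model, and "fundus.png" and all parameter values are placeholders.

```python
# Threshold an edge map and fit a circle to the optic disc with a Hough transform.
import cv2
import numpy as np

img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)     # placeholder input path
edges = cv2.Canny(img, 50, 150)                          # stand-in for the learned edge map
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY)

circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=2, minDist=img.shape[0],
                           param1=100, param2=30, minRadius=30, maxRadius=120)
if circles is not None:
    x, y, r = [int(v) for v in circles[0, 0]]
    cv2.circle(img, (x, y), r, 255, 2)                   # draw the fitted boundary
    cv2.imwrite("od_boundary.png", img)
```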
{
"docid": "3eebecff1cb89f5490602f43717902b7",
"text": "Radiation therapy (RT) is an integral part of prostate cancer treatment across all stages and risk groups. Immunotherapy using a live, attenuated, Listeria monocytogenes-based vaccines have been shown previously to be highly efficient in stimulating anti-tumor responses to impact on the growth of established tumors in different tumor models. Here, we evaluated the combination of RT and immunotherapy using Listeria monocytogenes-based vaccine (ADXS31-142) in a mouse model of prostate cancer. Mice bearing PSA-expressing TPSA23 tumor were divided to 5 groups receiving no treatment, ADXS31-142, RT (10 Gy), control Listeria vector and combination of ADXS31-142 and RT. Tumor growth curve was generated by measuring the tumor volume biweekly. Tumor tissue, spleen, and sera were harvested from each group for IFN-γ ELISpot, intracellular cytokine assay, tetramer analysis, and immunofluorescence staining. There was a significant tumor growth delay in mice that received combined ADXS31-142 and RT treatment as compared with mice of other cohorts and this combined treatment causes complete regression of their established tumors in 60 % of the mice. ELISpot and immunohistochemistry of CD8+ cytotoxic T Lymphocytes (CTL) showed a significant increase in IFN-γ production in mice with combined treatment. Tetramer analysis showed a fourfold and a greater than 16-fold increase in PSA-specific CTLs in animals receiving ADXS31-142 alone and combination treatment, respectively. A similar increase in infiltration of CTLs was observed in the tumor tissues. Combination therapy with RT and Listeria PSA vaccine causes significant tumor regression by augmenting PSA-specific immune response and it could serve as a potential treatment regimen for prostate cancer.",
"title": ""
},
{
"docid": "a58c708051c728754a00fa77a54be83c",
"text": "Vol. 44, No. 6, 2015 We developed a classroom observation protocol for quantitatively measuring student engagement in large university classes. The Behavioral Engagement Related to Instruction (BERI) protocol can be used to provide timely feedback to instructors as to how they can improve student engagement in their classrooms. We tested BERI on seven courses with different instructors and pedagogy. BERI achieved excellent interrater agreement (>95%) with a one-hour training session with new observers. It also showed consistent patterns of variation in engagement with instructor actions and classroom activity. Most notably, it showed that there was substantially higher engagement among the same group of students when interactive teaching methods were used compared with more traditional didactic methods. The same general variations in student engagement with instructional methods were present in all parts of the room and for different instructors. A New Tool for Measuring Student Behavioral Engagement in Large University Classes",
"title": ""
},
{
"docid": "d9d754d6ef106b4c421b5a4022cd3c9a",
"text": "This paper presents the research agenda that has been proposed to develop an integrated model to explain technology adoption of SMEs in Malaysia. SMEs form over 90% of all business entities in Malaysia and they have been contributing to the development of the nation. Technology adoption has been a thorn issue among SMEs as they require big outlay which might not be available to the SMEs. Although resource has been an issue among SMEs they cannot lie low and ignore the technological advancements that are taking place at a rapid pace. With that in mind this paper proposes a model to explain the technology adoption issue among SMEs. Keywords-Technology adoption, integrated model, Small and Medium Enterprises (SME), Malaysia",
"title": ""
},
{
"docid": "f360b0a83f257b61fccfb20077314101",
"text": "Supply chain management (SCM) is an emerging field that has commanded attention and support from the industrial community. Demand forecast taking inventory into consideration is an important issue in SCM. There are many diverse inventory systems, in theory or practice, which are operated by entities (companies) in a supply chain. In order to increase supply chain effectiveness, minimize total cost, and reduce the bullwhip effect, integration and coordination of these different systems in the supply chain (SC) are required using information technology and effective communication. The paper develops a multi-agent system to simulate a supply chain, where agents operate these entities with different inventory systems. Agents are coordinated to control inventory and minimize the total cost of a SC by sharing information and forecasting knowledge. The demand is forecasted with a genetic algorithm (GA) and the ordering quantity is offered at each echelon incorporating the perspective of bsystems thinkingQ. By using this agent-based system, the results show that total cost decreases and the ordering variation curve becomes smooth. D 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
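A small genetic algorithm in the spirit of the passage above: chromosomes encode the smoothing constant of a simple exponential-smoothing forecaster, and fitness is the mean absolute forecast error on a made-up demand history. This is only an illustration of GA-based demand forecasting, not the multi-agent system from the paper.

```python
# GA tuning an exponential-smoothing forecaster on a toy demand series.
import random

demand = [102, 98, 110, 120, 115, 130, 128, 140, 150, 149]

def forecast_error(alpha):
    f, err = demand[0], 0.0
    for d in demand[1:]:
        err += abs(d - f)
        f = alpha * d + (1 - alpha) * f
    return err / (len(demand) - 1)

def evolve(pop_size=20, generations=40, mut=0.1):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=forecast_error)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                     # arithmetic crossover
            if random.random() < mut:               # Gaussian mutation, clipped to [0, 1]
                child = min(1.0, max(0.0, child + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=forecast_error)

alpha = evolve()
print(f"best alpha={alpha:.3f}, MAE={forecast_error(alpha):.2f}")
```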
{
"docid": "55fdf6b013aa8e4082137a4c84a2873d",
"text": "The Named Data Networking (NDN) project is emerging as one of the most promising information-centric future Internet architectures. Besides NDN recognized potential as a content retrieval solution in wired and wireless domains, its innovative concepts, such as named content, name-based routing and in-network caching, particularly suit the requirements of Internet of Things (IoT) interconnecting billions of heterogeneous objects. IoT highly differs from today's Internet due to resource-constrained devices, massive volumes of small exchanged data, and traffic type diversity. The study in this paper addresses the design of a high-level NDN architecture, whose main components are overhauled to specifically meet the IoT challenges.",
"title": ""
},
{
"docid": "dd9425e46bb7583385842e929452e3d2",
"text": "This paper presents a high-performance low-ringing ultra-wideband monocycle picosecond pulse generator, formed using a step recovery diode (SRD), simulated in ADS software and generated through experimentation. The pulse generator comprises three parts, a step recovery diode, a field-effect transistor and a Schottky diode, used to eliminate the positive and negative ringing of pulse. Simulated results validate the design. Measured results indicate an output waveform of 1.88 peak-to-peak amplitude and 307ps pulse duration with a minimal ringing of -22.5 dB, providing good symmetry and low level of ringing. A high degree of coordination between the simulated and measured results is achieved.",
"title": ""
},
{
"docid": "053470c0115d17ffbcbeea313f2da702",
"text": "Although a significant number of public organizations have embraced the idea of open data, many are still reluctant to do this. One root cause is that the publicizing of data represents a shift from a closed to an open system of governance, which has a significant impact upon the relationships between public agencies and the users of open data. Yet no systematic research is available which compares the benefits of an open data with the barriers to its adoption. Based on interviews and a workshop, the benefits and adoption barriers for open data have been derived. The findings show that a gap exists between the promised benefits and barriers. They furthermore suggest that a conceptually simplistic view is often adopted with regard to open data, one which automatically correlates the publicizing of data with use and benefits. Five ‘myths’ are formulated promoting the use of open data and placing the expectations within a realistic perspective. Further, the recommendation is given to take a user’s view and to actively govern the relationship between government and its users.",
"title": ""
},
{
"docid": "9a1665cff530d93c84598e7df947099f",
"text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.",
"title": ""
},
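The NML-based stochastic-complexity score used by CISC is involved; as a crude, heavily simplified stand-in for the "which direction compresses better" idea in the passage above, the sketch below compares conditional Shannon entropies in the two directions on discrete data. It illustrates the two-direction scoring principle only and is not the paper's method; the example data is made up.

```python
# Crude two-direction score: lower conditional entropy ~ cheaper description.
from collections import Counter
from math import log2

def cond_entropy(xs, ys):
    """H(Y | X) in bits, estimated from paired samples."""
    n = len(xs)
    joint, marg = Counter(zip(xs, ys)), Counter(xs)
    return sum(c / n * log2(marg[x] / c) for (x, y), c in joint.items())

def infer_direction(xs, ys):
    x_to_y = cond_entropy(xs, ys)        # bits needed for Y once X is known
    y_to_x = cond_entropy(ys, xs)
    return "X -> Y" if x_to_y < y_to_x else "Y -> X"

xs = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
ys = [x % 2 for x in xs]                 # Y is a deterministic function of X
print(infer_direction(xs, ys))           # X -> Y
```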
{
"docid": "28037e911859b3cc0221452e82cac3fe",
"text": "This paper proposes a real-time DSP- and FPGA-based implementation method of a space vector modulation (SVM) algorithm for an indirect matrix converter (IMC). Therefore, low-cost and compact control platform is built using a 32-bit fixed-point DSP (TMS320F2812) operating at 150 MHz and a SPARTAN 3E FPGA operating at 50 MHz. The method consists in using the event-manager modules of the DSP to build specified pulses at its PWM output peripherals, which are fed to the digital input ports of a FPGA. Moreover, a simple logical processing and delay times are thereafter implemented in the FPGA so as to synthesize the suitable gate pulse patterns for the semiconductor-controlled devices. It is shown that the proposed implementation method enables high switching frequency operation with high pulse resolution as well as a negligible propagation time for the generation of the gating pulses. Experimental results from an IMC prototype confirm the practical feasibility of the proposed technique.",
"title": ""
},
{
"docid": "c6a7c67fa77d2a5341b8e01c04677058",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "7a356a485b46c6fc712a0174947e142e",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.",
"title": ""
},
{
"docid": "5cf07787668287a91c8b26b9ab9c67fa",
"text": "The term manifold learning encompasses a class of machine learning techniques that convert data from a high to lower dimensional representation while respecting the intrinsic geometry of the data. The intuition underlying the use of manifold learning in the context of image analysis is that, while each image may be viewed as a single point in a very high-dimensional space, a set of such points for a population of images may be well represented by a sub-manifold of the space that is likely to be non-linear and of a significantly lower dimension. Recently, manifold learning techniques have begun to be applied to the field of medical image analysis. This chapter will review the most popular manifold learning techniques such as Multi-Dimensional Scaling (MDS), Isomap, Local linear embedding, and Laplacian eigenmaps. It will also demonstrate how these techniques can be used for image registration, segmentation, and biomarker discovery from medical images.",
"title": ""
},
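A quick illustration of one of the manifold-learning methods surveyed above (Isomap) using scikit-learn on a synthetic "swiss roll"; in the medical-imaging setting described in the passage, the toy points would be replaced by image-derived feature vectors.

```python
# Isomap embedding of a synthetic 3-D manifold into 2 dimensions.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X.shape, "->", embedding.shape)    # (1000, 3) -> (1000, 2)
```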
{
"docid": "8a41d0190ae25baf0a270d9524ea99d3",
"text": "Hybrid AC/DC microgrid is a compromised solution to cater for the increasing penetration of DC-compatible energy sources, storages and loads. In this paper, DC/DC converter with High Frequency Transformer (DHFT) is proposed to replace the conventional bulky transformer for bus voltage matching and galvanic isolation. Various DHFT topologies have been compared and CLLC-type has been recommended due to its capabilities of bidirectional power flow, seamless transition and low switching loss. Different operating scenarios of the hybrid AC/DC microgrid have been analyzed and DHFT open-loop control has been selected to simplify systematic coordination. DHFT are designed in order to maximize the conversion efficiency and minimize output voltage variations in different loading conditions. Lab-scale prototypes of the DHFT and hybrid AC/DC microgrid have been developed for experimental verifications. The performances of DHFT and system in both steady state and transient states have been confirmed.",
"title": ""
},
{
"docid": "a532dcd3dbaf3ba784d1f5f8623b600c",
"text": "Our long term interest is in building inference algorithms capable of answering questions and producing human-readable explanations by aggregating information from multiple sources and knowledge bases. Currently information aggregation (also referred to as “multi-hop inference”) is challenging for more than two facts due to “semantic drift”, or the tendency for natural language inference algorithms to quickly move off-topic when assembling long chains of knowledge. In this paper we explore the possibility of generating large explanations with an average of six facts by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations represented as lexically-connected explanation graphs grounded in a semi-structured knowledge base of tables. We empirically demonstrate that there are sufficient common explanatory patterns in this corpus that it is possible in principle to reconstruct unseen explanation graphs by merging multiple explanatory patterns, then adapting and/or adding to their knowledge. This may ultimately provide a mechanism to allow inference algorithms to surpass the two-fact “aggregation horizon” in practice by using common explanatory patterns as constraints to limit the search space during information aggregation.",
"title": ""
},
{
"docid": "45a24862022bbc1cf3e33aea1e4f8b12",
"text": "Biohybrid consists of a living organism or cell and at least one engineered component. Designing robot-plant biohybrids is a great challenge: it requires interdisciplinary reconsideration of capabilities intimate specific to the biology of plants. Envisioned advances should improve agricultural/horticultural/social practice and could open new directions in utilization of plants by humans. Proper biohybrid cooperation depends upon effective communication. During evolution, plants developed many ways to communicate with each other, with animals, and with microorganisms. The most notable examples are: the use of phytohormones, rapid long-distance signaling, gravity, and light perception. These processes can now be intentionally re-shaped to establish plant-robot communication. In this article, we focus on plants physiological and molecular processes that could be used in bio-hybrids. We show phototropism and biomechanics as promising ways of effective communication, resulting in an alteration in plant architecture, and discuss the specifics of plants anatomy, physiology and development with regards to the bio-hybrids. Moreover, we discuss ways how robots could influence plants growth and development and present aims, ideas, and realized projects of plant-robot biohybrids.",
"title": ""
},
{
"docid": "b86165edb0321b876a6511ae1eda2ec6",
"text": "INTRODUCTION\nVulvar cancer has a lower incidence in high income countries, but is rising, in part, due to the high life expectancy in these societies. Radical vulvectomy is still the standard treatment in initial stages. Wound dehiscence contitututes one of the most common postoperative complications.\n\n\nPRESENTATION OF CASE\nA 76year old patient with a squamous cell carcinoma of the vulva, FIGO staged, IIIb is presented. Radical vulvectomy and bilateral inguinal lymph node dissection with lotus petal flaps reconstruction are performed as the first treatment. Wound infection and dehiscence of lotus petal flaps was seen postoperatively. Initial management consisted in antibiotics administration and removing necrotic tissue from surgical wound. After this initial treatment, negative wound pressure therapy was applied for 37days with good results.\n\n\nDISCUSSION\nWound dehiscence in radical vulvectomy remains the most frequent complication in the treatment of vulvar cancer. The treatment of this complications is still challenging for most gynecologic oncologist surgeons.\n\n\nCONCLUSION\nThe utilization of the negative wound pressure therapy could contribute to reduce hospitalization and the direct and indirect costs of these complications.",
"title": ""
},
{
"docid": "b25e35dd703d19860bbbd8f92d80bd26",
"text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.",
"title": ""
},
{
"docid": "74e15be321ec4e2d207f3331397f0399",
"text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;",
"title": ""
},
{
"docid": "cce2e8ee8e62bb5ef4b4fc36756a3f50",
"text": "For the development and operating efficiency of Web applications based on the Model-View-Controller (MVC) framework, and, according to the actual business environment and needs in the project practice, the framework of Web application system is studied in this paper. Through the research of Spring MVC framework and Mybatis framework as well as some related core techniques, combined with JSP and JSTL technology, this paper realizes the design of a lightweight Web application framework based on Spring MVC and Mybatis.",
"title": ""
}
] |
scidocsrr
|
b860daa9591e80ca275e87bb55fc4f42
|
PM Generational Differences in the Hospitality Industry : An issue of concern ?
|
[
{
"docid": "23a5d1aebe5e2f7dd5ed8dfde17ce374",
"text": "Today's workplace often includes workers from 4 distinct generations, and each generation brings a unique set of core values and characteristics to an organization. These generational differences can produce benefits, such as improved patient care, as well as challenges, such as conflict among employees. This article reviews current research on generational differences in educational settings and the workplace and discusses the implications of these findings for medical imaging and radiation therapy departments.",
"title": ""
}
] |
[
{
"docid": "d3e409b074c4c26eb208b27b7b58a928",
"text": "The increase in concern for carbon emission and reduction in natural resources for conventional power generation, the renewable energy based generation such as Wind, Photovoltaic (PV), and Fuel cell has gained importance. Out of which the PV based generation has gained significance due to availability of abundant sunlight. As the Solar power conversion is a low efficient conversion process, accurate and reliable, modeling of solar cell is important. Due to the non-linear nature of diode based PV model, the accurate design of PV cell is a difficult task. A built-in model of PV cell is available in Simscape, Simelectronics library, Matlab. The equivalent circuit parameters have to be computed from data sheet and incorporated into the model. However it acts as a stiff source when implemented with a MPPT controller. Henceforth, to overcome this drawback, in this paper a two-diode model of PV cell is implemented in Matlab Simulink with reduced four required parameters along with similar configuration of the built-in model. This model allows incorporation of MPPT controller. The I-V and P-V characteristics of these two models are investigated under different insolation levels. A PV based generation system feeding a DC load is designed and investigated using these two models and further implemented with MPPT based on P&O technique.",
"title": ""
},
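A sketch of tracing the I-V and P-V curves of a diode-based PV model, as discussed in the passage above. For brevity a single-diode equation is solved here (the two-diode model adds a second, analogous diode term), and all parameter values are illustrative rather than datasheet values.

```python
# Numerically solve the implicit single-diode PV equation over a voltage sweep.
import numpy as np
from scipy.optimize import brentq

Iph, I0, n, Rs, Rsh, Ns = 8.0, 5e-8, 1.3, 0.2, 300.0, 60   # illustrative module parameters
Vt = 0.02585 * n * Ns                                      # thermal voltage x ideality x cells in series

def module_current(V):
    # Solve Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh - I = 0 for I.
    f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / Vt) - 1) - (V + I * Rs) / Rsh - I
    return brentq(f, -2 * Iph, 2 * Iph)

V = np.linspace(0.0, 40.0, 200)
I = np.array([module_current(v) for v in V])
P = V * I
print(f"max power ~{P.max():.0f} W at V = {V[P.argmax()]:.1f} V")
```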
{
"docid": "cf7c5cd5f4caa6ded09f8b91d9f0ea16",
"text": "Covariance matrix has recently received increasing attention in computer vision by leveraging Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has now been used as a generic representation in various recognition tasks. However, covariance matrix has shortcomings such as being prone to be singular, limited capability in modeling complicated feature relationship, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations shall be explored to achieve better recognition. It proposes an open framework to use the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework significantly elevates covariance representation to the unlimited opportunities provided by this new representation. Experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating the state-of-the-art performance over both the covariance and the existing non-covariance representations.",
"title": ""
},
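A small sketch of the representation discussed above: instead of the d x d covariance of a set of feature vectors, build a d x d RBF kernel matrix over the feature *dimensions*, which is symmetric positive-definite and can capture nonlinear relations between dimensions. The data, kernel choice, and bandwidth are illustrative assumptions.

```python
# SPD representation: kernel matrix over feature dimensions instead of covariance.
import numpy as np

def kernel_representation(F, gamma=0.5):
    """F: (n_samples, d) features from one image/sequence -> (d, d) SPD matrix."""
    F = F - F.mean(axis=0)                    # center each dimension
    D = F.T                                   # each row: one feature dimension over the samples
    sq = (D ** 2).sum(axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * D @ D.T
    K = np.exp(-gamma * dist2)                # RBF kernel between dimensions
    return K + 1e-6 * np.eye(K.shape[0])      # jitter keeps it strictly positive-definite

F = np.random.default_rng(0).normal(size=(50, 10))
K = kernel_representation(F)
print(K.shape, bool(np.all(np.linalg.eigvalsh(K) > 0)))
```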
{
"docid": "b17889bc5f4d4fb498a9b9c5d45bd560",
"text": "Photonic components are superior to electronic ones in terms of operational bandwidth, but the diffraction limit of light poses a significant challenge to the miniaturization and high-density integration of optical circuits. The main approach to circumvent this problem is to exploit the hybrid nature of surface plasmon polaritons (SPPs), which are light waves coupled to free electron oscillations in a metal that can be laterally confined below the diffraction limit using subwavelength metal structures. However, the simultaneous realization of strong confinement and a propagation loss sufficiently low for practical applications has long been out of reach. Channel SPP modes—channel plasmon polaritons (CPPs)—are electromagnetic waves that are bound to and propagate along the bottom of V-shaped grooves milled in a metal film. They are expected to exhibit useful subwavelength confinement, relatively low propagation loss, single-mode operation and efficient transmission around sharp bends. Our previous experiments showed that CPPs do exist and that they propagate over tens of micrometres along straight subwavelength grooves. Here we report the design, fabrication and characterization of CPP-based subwavelength waveguide components operating at telecom wavelengths: Y-splitters, Mach–Zehnder interferometers and waveguide–ring resonators. We demonstrate that CPP guides can indeed be used for large-angle bending and splitting of radiation, thereby enabling the realization of ultracompact plasmonic components and paving the way for a new class of integrated optical circuits.",
"title": ""
},
{
"docid": "620adbf7781be0147d3af2ea16e3b9dc",
"text": "Knowledge graphs have been used throughout the history of information retrieval for a variety of tasks. Technological advances in knowledge acquisition and alignment technology from the last few years gave rise to a body of new approaches for utilizing knowledge graphs in text retrieval tasks. It is therefore time to consolidate the community efforts in studying how knowledge graph technology can be employed in information retrieval systems in the most effective way. It is also time to start a dialogue with researchers working on knowledge acquisition and alignment to ensure that resulting technologies and algorithms meet the demands posed by information retrieval tasks. The goal of this workshop is to bring together a community of researchers and practitioners who are interested in using, aligning, and constructing knowledge graphs and similar semantic resources for information retrieval applications.",
"title": ""
},
{
"docid": "47ba94d7ea8b6dc6ea287654288904b4",
"text": "We propose a VR video conferencing system over named data networks (NDN). The system is designed to support real-time, multi-party streaming and playback of 360 degree video on a web player. A centralized architecture is used, with a signaling server to coordinate multiple participants. To ensure real-time requirement, a protocol featuring prefetching is used for producer-consumer communication. Along with the native support of multicast in NDN, this design is expected to better support large amount of data streaming between multiple users.\n As a proof of concept, a protoype of the system is implemented with one-way real-time 360 video streaming. Experiments show that seamless streaming and interactive playback of 360 video can be achieved with low latency. Therefore, the proposed system has the potential to provide immersive VR experience for real-time multi-party video conferencing.",
"title": ""
},
{
"docid": "a8da8a2d902c38c6656ea5db841a4eb1",
"text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.",
"title": ""
},
{
"docid": "e2d63fece5536aa4668cd5027a2f42b9",
"text": "To ensure integrity, trust, immutability and authenticity of software and information (cyber data, user data and attack event data) in a collaborative environment, research is needed for cross-domain data communication, global software collaboration, sharing, access auditing and accountability. Blockchain technology can significantly automate the software export auditing and tracking processes. It allows to track and control what data or software components are shared between entities across multiple security domains. Our blockchain-based solution relies on role-based and attribute-based access control and prevents unauthorized data accesses. It guarantees integrity of provenance data on who updated what software module and when. Furthermore, our solution detects data leakages, made behind the scene by authorized blockchain network participants, to unauthorized entities. Our approach is used for data forensics/provenance, when the identity of those entities who have accessed/ updated/ transferred the sensitive cyber data or sensitive software is determined. All the transactions in the global collaborative software development environment are recorded in the blockchain public ledger and can be verified any time in the future. Transactions can not be repudiated by invokers. We also propose modified transaction validation procedure to improve performance and to protect permissioned IBM Hyperledger-based blockchains from DoS attacks, caused by bursts of invalid transactions.",
"title": ""
},
{
"docid": "3240607824a6dace92925e75df92cc09",
"text": "We propose a framework to model general guillotine restrictions in two-dimensional cutting problems formulated as Mixed Integer Linear Programs (MIP). The modeling framework requires a pseudo-polynomial number of variables and constraints, which can be effectively enumerated for medium-size instances. Our modeling of general guillotine cuts is the first one that, once it is implemented within a state-of-the-art MIP solver, can tackle instances of challenging size. We mainly concentrate our analysis on the Guillotine Two Dimensional Knapsack Problem (G2KP), for which a model, and an exact procedure able to significantly improve the computational performance, are given. We also show how the modeling of general guillotine cuts can be extended to other relevant problems such as the Guillotine Two Dimensional Cutting Stock Problem (G2CSP) and the Guillotine Strip Packing Problem (GSPP). Finally, we conclude the paper discussing an extensive set of computational experiments on G2KP and GSPP benchmark instances from the literature.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
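A compact sketch of the blur-insensitive phase descriptor described above (LPQ-style): local Fourier coefficients at four low frequencies are computed in a sliding window, the signs of their real and imaginary parts give an 8-bit code per pixel, and the code histogram is the texture feature. The decorrelation step of the full method is omitted, and the window size and test image are illustrative.

```python
# Simplified local-phase-quantization histogram (decorrelation omitted).
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(img, win=7):
    img = img.astype(float)
    r = win // 2
    x = np.arange(-r, r + 1)
    a = 1.0 / win
    freqs = [(a, 0), (0, a), (a, a), (a, -a)]            # four lowest non-zero frequencies
    codes = np.zeros(img.shape, dtype=int)
    bit = 0
    for u, v in freqs:
        wx = np.exp(-2j * np.pi * u * x)                 # separable complex window
        wy = np.exp(-2j * np.pi * v * x)
        resp = convolve2d(convolve2d(img, wy[:, None], mode="same"), wx[None, :], mode="same")
        codes += (resp.real >= 0).astype(int) << bit     # quantize phase via coefficient signs
        codes += (resp.imag >= 0).astype(int) << (bit + 1)
        bit += 2
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

img = np.random.default_rng(0).random((64, 64))
print(lpq_histogram(img).shape)                          # (256,)
```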
{
"docid": "3534e4321560c826057e02c52d4915dd",
"text": "While hexahedral mesh elements are preferred by a variety of simulation techniques, constructing quality all-hex meshes of general shapes remains a challenge. An attractive hex-meshing approach, often referred to as submapping, uses a low distortion mapping between the input model and a PolyCube (a solid formed from a union of cubes), to transfer a regular hex grid from the PolyCube to the input model. Unfortunately, the construction of suitable PolyCubes and corresponding volumetric maps for arbitrary shapes remains an open problem. Our work introduces a new method for computing low-distortion volumetric PolyCube deformations of general shapes and for subsequent all-hex remeshing. For a given input model, our method simultaneously generates an appropriate PolyCube structure and mapping between the input model and the PolyCube. From these we automatically generate good quality all-hex meshes of complex natural and man-made shapes.",
"title": ""
},
{
"docid": "d188fbbf6824ad913af00639408cc987",
"text": "We measure the impact of the UK's initial 2009–10 Quantitative Easing (QE) Programme on bonds and other assets. First, we use a macro-finance yield curve both to create a counterfactual path for bond yields and to estimate the impact of QE directly. Second, we analyse the impact of individual QE operations on a range of asset prices. We find that QE significantly lowered government bond yields through the portfolio balance channel – by around 50 to 100 basis points. We also uncover significant effects of individual operations but limited pass through to other assets.",
"title": ""
},
{
"docid": "28e9bb0eef126b9969389068b6810073",
"text": "This paper presents the task specifications for designing a novel Insertable Robotic Effectors Platform (IREP) with integrated stereo vision and surgical intervention tools for Single Port Access Surgery (SPAS). This design provides a compact deployable mechanical architecture that may be inserted through a single Ø15 mm access port. Dexterous surgical intervention and stereo vision are achieved via the use of two snake-like continuum robots and two controllable CCD cameras. Simulations and dexterity evaluation of our proposed design are compared to several design alternatives with different kinematic arrangements. Results of these simulations show that dexterity is improved by using an independent revolute joint at the tip of a continuum robot instead of achieving distal rotation by transmission of rotation about the backbone of the continuum robot. Further, it is shown that designs with two robotic continuum robots as surgical arms have diminished dexterity if the bases of these arms are close to each other. This result justifies our design and points to ways of improving the performance of existing designs that use continuum robots as surgical arms.",
"title": ""
},
{
"docid": "c0350ac9bd1c38252e04a3fd097ae6ee",
"text": "In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.",
"title": ""
},
{
"docid": "811080d1bf24f041792d6895791242bb",
"text": "We survey the use of weighted nite state transducers WFSTs in speech recognition We show that WFSTs provide a common and natural rep resentation for HMM models context dependency pronunciation dictio naries grammars and alternative recognition outputs Furthermore gen eral transducer operations combine these representations exibly and e ciently Weighted determinization and minimization algorithms optimize their time and space requirements and a weight pushing algorithm dis tributes the weights along the paths of a weighted transducer optimally for speech recognition As an example we describe a North American Business News NAB recognition system built using these techniques that combines the HMMs full cross word triphones a lexicon of forty thousand words and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real time on a very simple decoder In another example we show that the same techniques can be used to optimize lattices for second pass recognition In a third example we show how general automata operations can be used to assemble lattices from di erent recognizers to improve recognition performance Introduction Much of current large vocabulary speech recognition is based on models such as HMMs tree lexicons or n gram language models that can be represented by weighted nite state transducers Even when richer models are used for instance context free grammars for spoken dialog applications they are often restricted for e ciency reasons to regular subsets either by design or by approximation Pereira and Wright Nederhof Mohri and Nederhof M Mohri Weighted FSTs in Speech Recognition A nite state transducer is a nite automaton whose state transitions are labeled with both input and output symbols Therefore a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence A weighted transducer puts weights on transitions in addition to the input and output symbols Weights may encode probabilities durations penalties or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence Weighted transducers are thus a natural choice to represent the probabilistic nite state models prevalent in speech processing We present a survey of the recent work done on the use of weighted nite state transducers WFSTs in speech recognition Mohri et al Pereira and Riley Mohri Mohri et al Mohri and Riley Mohri et al Mohri and Riley We show that common methods for combin ing and optimizing probabilistic models in speech processing can be generalized and e ciently implemented by translation to mathematically well de ned op erations on weighted transducers Furthermore new optimization opportunities arise from viewing all symbolic levels of ASR modeling as weighted transducers Thus weighted nite state transducers de ne a common framework with shared algorithms for the representation and use of the models in speech recognition that has important algorithmic and software engineering bene ts We start by introducing the main de nitions and notation for weighted nite state acceptors and transducers used in this work We then present introductory speech related examples and describe the most important weighted transducer operations relevant to speech applications Finally we give examples of the ap plication of transducer representations and operations on transducers to large vocabulary speech recognition with results that 
meet certain optimality criteria Weighted Finite State Transducer De nitions and Al gorithms The de nitions that follow are based on the general algebraic notion of semiring Kuich and Salomaa The semiring abstraction permits the de nition of automata representations and algorithms over a broad class of weight sets and algebraic operations A semiring K consists of a set K equipped with an associative and com mutative operation and an associative operation with identities and respectively such that distributes over and a a In other words a semiring is similar to the more familiar ring algebraic structure such as the ring of polynomials over the reals except that the additive operation may not have an inverse For example N is a semiring The weights used in speech recognition often represent probabilities the cor responding semiring is then the probability semiring R For numerical stability implementations may replace probabilities with log probabilities The appropriate semiring is then the image by log of the semiring R M Mohri Weighted FSTs in Speech Recognition and is called the log semiring When using log probabilities with a Viterbi best path approximation the appropriate semiring is the tropical semiring R f g min In the following de nitions we assume an arbitrary semiring K We will give examples with di erent semirings to illustrate the variety of useful computations that can be carried out in this framework by a judicious choice of semiring",
"title": ""
},
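A small illustration of the semiring abstraction described in the passage above: the same path-weight computation specialises to total probability, the log semiring, or the tropical (Viterbi) cost, depending on which (⊕, ⊗, 0̄, 1̄) is plugged in. The tiny two-path example is made up for illustration.

```python
# Path weights under three semirings: probability, log, and tropical.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Semiring:
    plus: callable      # ⊕: combines alternative paths
    times: callable     # ⊗: accumulates weight along one path
    zero: float         # identity of ⊕
    one: float          # identity of ⊗

probability = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)
tropical    = Semiring(min, lambda a, b: a + b, math.inf, 0.0)
log_sr      = Semiring(lambda a, b: -math.log(math.exp(-a) + math.exp(-b)),
                       lambda a, b: a + b, math.inf, 0.0)

def total_weight(paths, sr):
    """⊕ over paths of the ⊗ of the edge weights on each path."""
    acc = sr.zero
    for path in paths:
        w = sr.one
        for edge in path:
            w = sr.times(w, edge)
        acc = sr.plus(acc, w)
    return acc

probs = [[0.5, 0.4], [0.5, 0.6]]                   # two alternative two-edge paths
neglog = [[-math.log(w) for w in p] for p in probs]
print(total_weight(probs, probability))            # 0.5  (total probability)
print(math.exp(-total_weight(neglog, log_sr)))     # 0.5  (same quantity, log semiring)
print(total_weight(neglog, tropical))              # negative log-prob of the best path (Viterbi)
```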
{
"docid": "ac8dfb227545260e468957185acd6faa",
"text": "Writing mostly is a solitary activity. Right now, I sit in front of a computer screen. On my desk are piles of paper; notes for what I want to say; unfinished projects waiting to be attended to; books on shelves nearby to be consulted. I need to be alone when I write. Whether writing on a computer, on a typewriter or by hand, most writers I know prefer a secluded place without distractions from telephones and other people who",
"title": ""
},
{
"docid": "338950e1c2ef5db0c611e4e65e51da76",
"text": "Single arm rectangular spiral antenna with four open circuit switches over a high-impedance surface (HIS) is proposed for pattern reconfigurable applications. The HIS plane without vias is utilized to achieve a low-profile antenna design with a net thickness of 5.08 mm. This is equivalent to ~ lambdao/17 for the intended operating frequency of 3.3 GHz. By using the possible sixteen switching combinations a near 360deg beam steering is achieved, and the switched beams do not have a polarization variation from one pattern to another. The realized pattern reconfigurable antenna has both the tilted (thetasmax ges 25deg) and axial (5deg < thetasmax < 10deg) beams, which have an average directivity of 6.9 dBi.",
"title": ""
},
{
"docid": "80f88101ea4d095a0919e64b7db9cadb",
"text": "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.",
"title": ""
},
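The RootSIFT mapping mentioned in the passage above is a one-line transform of standard SIFT descriptors: L1-normalise, then take an element-wise square root, so that Euclidean distance on the result corresponds to the Hellinger kernel on the originals. The random descriptors below are placeholders for real SIFT output.

```python
# RootSIFT: Hellinger-kernel mapping of SIFT descriptors.
import numpy as np

def root_sift(descs, eps=1e-7):
    """descs: (n, 128) SIFT descriptors -> (n, 128) RootSIFT descriptors."""
    descs = descs / (np.abs(descs).sum(axis=1, keepdims=True) + eps)   # L1 normalisation
    return np.sqrt(descs)

sift = np.random.default_rng(0).integers(0, 256, size=(5, 128)).astype(float)
rs = root_sift(sift)
print(np.allclose((rs ** 2).sum(axis=1), 1.0))      # unit L2 norm after the mapping
```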
{
"docid": "22d8bfa59bb8e25daa5905dbb9e1deea",
"text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.",
"title": ""
},
{
"docid": "6e28ce874571ef5db8f5e44ff78488d2",
"text": "The importance of the maintenance function has increased because of its role in keeping and improving system availability and safety, as well as product quality. To support this role, the development of the communication and information technologies has allowed the emergence of the concept of e-maintenance. Within the era of e-manufacturing and e-business, e-maintenance provides the opportunity for a new maintenance generation. As we will discuss later in this paper, e-maintenance integrates existing telemaintenance principles, with Web services and modern e-collaboration principles. Collaboration allows to share and exchange not only information but also knowledge and (e)-intelligence. By means of a collaborative environment, pertinent knowledge and intelligence become available and usable at the right place and time, in order to facilitate reaching the best maintenance decisions. This paper outlines the basic ideas within the e-maintenance concept and then provides an overview of the current research and challenges in this emerging field. An underlying objective is to identify the industrial/academic actors involved in the technological, organizational or management issues related to the development of e-maintenance. Today, this heterogeneous community has to be federated in order to bring up e-maintenance as a new scientific discipline. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "aabae18789f9aab997ea7e1a92497de7",
"text": "We develop, in this paper, a representation of time and events that supports a range of reasoning tasks such as monitoring and detection of event patterns which may facilitate the explanation of root cause(s) of faults. We shall compare two approaches to event definition: the active database approach in which events are defined in terms of the conditions for their detection at an instant, and the knowledge representation approach in which events are defined in terms of the conditions for their occurrence over an interval. We shall show the shortcomings of the former definition and employ a three-valued temporal first order nonmonotonic logic, extended with events, in order to integrate both definitions.",
"title": ""
}
] |
scidocsrr
|
660337f1ab9ed1ab07ef473701a70bb4
|
Clothoid-based model predictive control for autonomous driving
|
[
{
"docid": "ccc4b8f75e39488068293540aeb508e2",
"text": "We present a novel approach to sketching 2D curves with minimally varying curvature as piecewise clothoids. A stable and efficient algorithm fits a sketched piecewise linear curve using a number of clothoid segments with G2 continuity based on a specified error tolerance. Further, adjacent clothoid segments can be locally blended to result in a G3 curve with curvature that predominantly varies linearly with arc length. We also handle intended sharp corners or G1 discontinuities, as independent rotations of clothoid pieces. Our formulation is ideally suited to conceptual design applications where aesthetic fairness of the sketched curve takes precedence over the precise interpolation of geometric constraints. We show the effectiveness of our results within a system for sketch-based road and robot-vehicle path design, where clothoids are already widely used.",
"title": ""
}
] |
[
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "e0079af0b45bf8d6fc194e59217e2a53",
"text": "Acral peeling skin syndrome (APSS) is an autosomal recessive skin disorder characterized by acral blistering and peeling of the outermost layers of the epidermis. It is caused by mutations in the gene for transglutaminase 5, TGM5. Here, we report on clinical and molecular findings in 11 patients and extend the TGM5 mutation database by four, to our knowledge, previously unreported mutations: p.M1T, p.L41P, p.L214CfsX15, and p.S604IfsX9. The recurrent mutation p.G113C was found in 9 patients, but also in 3 of 100 control individuals in a heterozygous state, indicating that APSS might be more widespread than hitherto expected. Using quantitative real-time PCR, immunoblotting, and immunofluorescence analysis, we demonstrate that expression and distribution of several epidermal differentiation markers and corneodesmosin (CDSN) is altered in APSS keratinocytes and skin. Although the expression of transglutaminases 1 and 3 was not changed, we found an upregulation of keratin 1, keratin 10, involucrin, loricrin, and CDSN, probably as compensatory mechanisms for stabilization of the epidermal barrier. Our results give insights into the consequences of TGM5 mutations on terminal epidermal differentiation.",
"title": ""
},
{
"docid": "54c66f2021f055d3fb09f733ab1c2c39",
"text": "In December 2013, sixteen teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition, partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA’s Johnson Space Center led a team comprised of numerous partners to develop Valkyrie, NASA’s first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie’s application intent is aimed at not only responding to events like Fukushima, but also advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie’s mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition and a vision of future work is provided.",
"title": ""
},
{
"docid": "e3218926a5a32d2c44d5aea3171085e2",
"text": "The present study sought to determine the effects of Mindful Sport Performance Enhancement (MSPE) on runners. Participants were 25 recreational long-distance runners openly assigned to either the 4-week intervention or to a waiting-list control group, which later received the same program. Results indicate that the MSPE group showed significantly more improvement in organizational demands (an aspect of perfectionism) compared with controls. Analyses of preto postworkshop change found a significant increase in state mindfulness and trait awareness and decreases in sport-related worries, personal standards perfectionism, and parental criticism. No improvements in actual running performance were found. Regression analyses revealed that higher ratings of expectations and credibility of the workshop were associated with lower postworkshop perfectionism, more years running predicted higher ratings of perfectionism, and more life stressors predicted lower levels of worry. Findings suggest that MSPE may be a useful mental training intervention for improving mindfulness, sport-anxiety related worry, and aspects of perfectionism in long-distance runners.",
"title": ""
},
{
"docid": "e708fc43b5ac8abf8cc2707195e8a45e",
"text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.",
"title": ""
},
{
"docid": "64687b4df5001e0bca42dc92e8e4915a",
"text": "The articles published in Landscape and Urban Planning during the past 16 years provide valuable insights into how humans interact with outdoor urban environments. This review paper explores the wide spectrum of human dimensions and issues, or human needs, addressed by 90 of these studies. As a basis for analysis, the major themes tapped by the findings were classified into two overarching groups containing three categories each. The Nature needs, directly linked with the physical features of the environmental setting, were categorized in terms of contact with nature, aesthetic preference, and recreation and play. The role of the environment is less immediate in the Human-interaction group, which includes the issues of social interaction, citizen participation in the design process, and community identity. Most significantly, the publications offer strong support for the important role nearby natural environments play in human well-being. Urban settings that provide nature contact are valuable not only in their own right, but also for meeting other needs in a manner unique to these more natural settings. In addition, although addressed in different ways, remarkable similarities exist concerning these six people requirements across diverse cultures and political systems. Urban residents worldwide express a desire for contact with nature and each other, attractive environments, places in which to recreate and play, privacy, a more active role in the design of their community, and a sense of community identity. The studies reviewed here offer continued evidence that the design of urban landscapes strongly influences the well-being and behavior of users and nearby inhabitants. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "db20d821e1a517c5996897b8653bf192",
"text": "Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first 'smart' system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F- measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem along with steps forward.",
"title": ""
},
{
"docid": "5af801ca029fa3a0517ef9d32e7baab0",
"text": "Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environment using face images, however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed as AutoGen for the same. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.",
"title": ""
},
{
"docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad",
"text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.",
"title": ""
},
{
"docid": "e0f18f58aca88cd6486e2ca3365cfe76",
"text": "Given a query graph $$q$$ q and a data graph $$G$$ G , subgraph similarity matching is to retrieve all matches of $$q$$ q in $$G$$ G with the number of missing edges bounded by a given threshold $$\\epsilon $$ ϵ . Many works have been conducted to study the problem of subgraph similarity matching due to its ability to handle applications involved with noisy or erroneous graph data. In practice, a data graph can be extremely large, e.g., a web-scale graph containing hundreds of millions of vertices and billions of edges. The state-of-the-art approaches employ centralized algorithms to process the subgraph similarity queries, and thus, they are infeasible for such a large graph due to the limited computational power and storage space of a centralized server. To address this problem, in this paper, we investigate subgraph similarity matching for a web-scale graph deployed in a distributed environment. We propose distributed algorithms and optimization techniques that exploit the properties of subgraph similarity matching, so that we can well utilize the parallel computing power and lower the communication cost among the distributed data centers for query processing. Specifically, we first relax and decompose $$q$$ q into a minimum number of sub-queries. Next, we send each sub-query to conduct the exact matching in parallel. Finally, we schedule and join the exact matches to obtain final query answers. Moreover, our workload-balance strategy further speeds up the query processing. Our experimental results demonstrate the feasibility of our proposed approach in performing subgraph similarity matching over web-scale graph data.",
"title": ""
},
{
"docid": "50fdc7454c5590cfc4bf151a3637a99c",
"text": "Named Entity Recognition (NER) is the task of locating and classifying names in text. In previous work, NER was limited to a small number of predefined entity classes (e.g., people, locations, and organizations). However, NER on the Web is a far more challenging problem. Complex names (e.g., film or book titles) can be very difficult to pick out precisely from text. Further, the Web contains a wide variety of entity classes, which are not known in advance. Thus, hand-tagging examples of each entity class is impractical. This paper investigates a novel approach to the first step in Web NER: locating complex named entities in Web text. Our key observation is that named entities can be viewed as a species of multiword units, which can be detected by accumulating n-gram statistics over the Web corpus. We show that this statistical method’s F1 score is 50% higher than that of supervised techniques including Conditional Random Fields (CRFs) and Conditional Markov Models (CMMs) when applied to complex names. The method also outperforms CMMs and CRFs by 117% on entity classes absent from the training data. Finally, our method outperforms a semi-supervised CRF by 73%.",
"title": ""
},
{
"docid": "f5d92a445b2d4ecfc55393794258582c",
"text": "This paper presents a multi-modulus frequency divider (MMD) based on the Extended True Single-Phase Clock (E-TSPC) Logic. The MMD consists of four cascaded divide-by-2/3 E-TSPC cells. The basic functionality of the MMD and the E-TSPC 2/3 divider are explained. The whole design was implemented in an [0.13] m CMOS process from IBM. Simulation and measurement results of the MMD are shown. Measurement results indicates a maximum operating frequency of [10] GHz and a power consumption of [4] mW for each stage. These results are compared to other state of the art dual modulus E-TSPC dividers, showing the good position of this design relating to operating frequency and power consumption.",
"title": ""
},
{
"docid": "359b6308a6e6e3d6857cb6b4f59fd1bc",
"text": "Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.",
"title": ""
},
{
"docid": "e06433abc3fe0e25e65339e50746d50f",
"text": "Context: Current software systems have increasingly implemented context-aware adaptations to handle the diversity of conditions of their surrounding environment. Therefore, people are becoming used to a variety of context-aware software systems (CASS). This context-awareness brings challenges to the software construction and testing because the context is unpredictable and may change at any time. Therefore, software engineers need to consider the dynamic context changes while testing CASS. Different test case design techniques (TCDT) have been proposed to support the testing of CASS. However, to the best of our knowledge, there is no analysis of these proposals on the advantages, limitations and their effective support to context variation during testing. Objective: To gather empirical evidence on TCDT concerned with CASS by identifying, evaluating and synthesizing knowledge available in the literature. Method: To undertake a secondary study (quasi -Systematic Literature Review) on TCDT for CASS regarding their assessed quality characteristics, used coverage criteria, test type, and test technique. Results: From 833 primary studies published between 2004 and 2014, just 17 studies regard the design of test cases for CASS. Most of them focus on functional suitability. Furthermore, some of them take into account the changes in the context by providing specific test cases for each context configuration (static perspective) during the test execution. These 17 studies revealed five challenges affecting the design of test cases and 20 challenges regarding the testing of CASS. Besides, seven TCDT are not empirically evaluated. Conclusion: A few TCDT partially support the testing of CASS. However, it has not been observed evidence on any TCDT supporting the truly context-aware testing, which that can adapt the expected output based on the context variation (dynamic perspective) during the test execution. It is an open issue deserving greater attention from researchers to increase the testing coverage and ensure users confidence in CASS.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
},
{
"docid": "334510797355ca654d01dc45b65693ef",
"text": "Liquid crystal displays (LCDs) hold a large share of the flat-panel display market because LCDs offer advantages such as low power consumption, low radiation, and good image quality. However, image defects, such as spotlight, uniformity, and Mura defects, can impair the quality of an LCD. This research examined human perceptions of region-Mura and used Response Time and subjective markdown price to indicate the various severity levels of region-Mura that appeared at different display locations. The results indicate that, within a specific Mura Level range, the Mura’s location has a considerable impact on perceived quality (p < 0.001). Mura on the centers of LCDs have more impact than Mura on the corners of LCDs. Not all peripheral Mura were considered to be equal; participants chose different price markdown prices for LCDs with Mura in lower corners than they chose for LCDs with Mura in upper corners. These findings suggest that a manufacturer should establish a scraping threshold for LCDs based on information regarding Mura location to avoid the production waste from scrapping those LCDs, and should rotate the panel to position the most severe Mura in the lower part of the display to obtain a better perceived quality. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "84337f21721a6aaae65061f677d11678",
"text": "This paper deals with the implementation of a stochastic flash ADC with the presence of comparator metastability, in a field-programmable gate array. Stochastic flash ADC exploits comparator threshold variation and can be implemented with simple and highly digital structures. We show that such designs is also prone to comparator metastability, therefore we propose an averaging scheme as a simple means to handle the situation. Experimental results from a prototype system based on an FPGA is given which shows the effectiveness of the averaging technique, resulting in a maximum measured SNDR of 22.24 dB with a sampling rate of 98 kHz.",
"title": ""
},
{
"docid": "e21f4c327c0006196fde4cf53ed710a7",
"text": "To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.",
"title": ""
},
{
"docid": "12363d704fcfe9fef767c5e27140c214",
"text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.",
"title": ""
}
] |
scidocsrr
|
57f64cec1e90f515cf7dd268fb57366f
|
Integrating Stereo Vision with a CNN Tracker for a Person-Following Robot
|
[
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
},
{
"docid": "f25dfc98473b09744d237d85d9aec0b5",
"text": "Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"title": ""
}
] |
[
{
"docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc",
"text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.",
"title": ""
},
{
"docid": "c197e1ab49287fc571f2a99a9501bf84",
"text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100, 000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "ccedb6cff054254f3427ab0d45017d2a",
"text": "Traffic and power generation are the main sources of urban air pollution. The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.",
"title": ""
},
{
"docid": "02df2dde321bb81220abdcff59418c66",
"text": "Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption.",
"title": ""
},
{
"docid": "910c42c4737d38db592f7249c2e0d6d2",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
},
{
"docid": "e6dcae244f91dc2d7e843d9860ac1cfd",
"text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.",
"title": ""
},
{
"docid": "eb6ee2fd1f7f1d0d767e4dde2d811bed",
"text": "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "c41efa28806b3ac3d2b23d9e52b85193",
"text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.",
"title": ""
},
{
"docid": "3b97d25d0a0e07d4b4fccc64ff251cce",
"text": "Consider a centralized hierarchical cloud-based multimedia system (CMS) consisting of a resource manager, cluster heads, and server clusters, in which the resource manager assigns clients' requests for multimedia service tasks to server clusters according to the task characteristics, and then each cluster head distributes the assigned task to the servers within its server cluster. For such a complicated CMS, however, it is a research challenge to design an effective load balancing algorithm that spreads the multimedia service task load on servers with the minimal cost for transmitting multimedia data between server clusters and clients, while the maximal load limit of each server cluster is not violated. Unlike previous work, this paper takes into account a more practical dynamic multiservice scenario in which each server cluster only handles a specific type of multimedia task, and each client requests a different type of multimedia service at a different time. Such a scenario can be modelled as an integer linear programming problem, which is computationally intractable in general. As a consequence, this paper further solves the problem by an efficient genetic algorithm with an immigrant scheme, which has been shown to be suitable for dynamic problems. Simulation results demonstrate that the proposed genetic algorithm can efficiently cope with dynamic multiservice load balancing in CMS.",
"title": ""
},
{
"docid": "f4b5a2584833466fa26da00b07a7f261",
"text": "This paper describes the development of the technology threat avoidance theory (TTAT), which explains individual IT users’ behavior of avoiding the threat of malicious information technologies. We articulate that avoidance and adoption are two qualitatively different phenomena and contend that technology acceptance theories provide a valuable, but incomplete, understanding of users’ IT threat avoidance behavior. Drawing from cybernetic theory and coping theory, TTAT delineates the avoidance behavior as a dynamic positive feedback loop in which users go through two cognitive processes, threat appraisal and coping appraisal, to decide how to cope with IT threats. In the threat appraisal, users will perceive an IT threat if they believe that they are susceptible Alan Dennis was the accepting senior editor for this paper. to malicious IT and that the negative consequences are severe. The threat perception leads to coping appraisal, in which users assess the degree to which the IT threat can be avoided by taking safeguarding measures based on perceived effectiveness and costs of the safeguarding measure and selfefficacy of taking the safeguarding measure. TTAT posits that users are motivated to avoid malicious IT when they perceive a threat and believe that the threat is avoidable by taking safeguarding measures; if users believe that the threat cannot be fully avoided by taking safeguarding measures, they would engage in emotion-focused coping. Integrating process theory and variance theory, TTAT enhances our understanding of human behavior under IT threats and makes an important contribution to IT security research and practice.",
"title": ""
},
{
"docid": "d485607db19e3defa000b24a59b1074a",
"text": "In the past years we have witnessed an explosive growth of the data and information on the World Wide Web, which makes it difficult for normal users to find the information that they are interested in. On the other hand, the majority of the data and resources are very unpopular, which can be considered as “hidden information”, and are very difficult to find. By building a bridge between the users and the objects and constructing their similarities, the Personal Recommender System (PRS) can recommend the objects that the users are potentially interested in. PRS plays an important role in not only social and economic life but also scientific analysis. The interdisciplinary PRS attracts attention from the communities of information science, computational mathematics, statistical physics, management science, and consumer behaviors, etc. In fact, PRS is one of the most efficient tools to solve the information overload problem. According to the recommendation algorithms, we introduce four typical systems, including the collaborating filtering system, the content-based system, the structure-based system, and the hybrid system. In addition, some improved algorithms are proposed to overcome the limitations of traditional systems. This review article may shed some light on the study of PRS from different backgrounds.",
"title": ""
},
{
"docid": "fad8cf15678cccbc727e9fba6292474d",
"text": "OBJECTIVE\nClinical records contain significant medical information that can be useful to researchers in various disciplines. However, these records also contain personal health information (PHI) whose presence limits the use of the records outside of hospitals. The goal of de-identification is to remove all PHI from clinical records. This is a challenging task because many records contain foreign and misspelled PHI; they also contain PHI that are ambiguous with non-PHI. These complications are compounded by the linguistic characteristics of clinical records. For example, medical discharge summaries, which are studied in this paper, are characterized by fragmented, incomplete utterances and domain-specific language; they cannot be fully processed by tools designed for lay language.\n\n\nMETHODS AND RESULTS\nIn this paper, we show that we can de-identify medical discharge summaries using a de-identifier, Stat De-id, based on support vector machines and local context (F-measure=97% on PHI). Our representation of local context aids de-identification even when PHI include out-of-vocabulary words and even when PHI are ambiguous with non-PHI within the same corpus. Comparison of Stat De-id with a rule-based approach shows that local context contributes more to de-identification than dictionaries combined with hand-tailored heuristics (F-measure=85%). Comparison with two well-known named entity recognition (NER) systems, SNoW (F-measure=94%) and IdentiFinder (F-measure=36%), on five representative corpora show that when the language of documents is fragmented, a system with a relatively thorough representation of local context can be a more effective de-identifier than systems that combine (relatively simpler) local context with global context. Comparison with a Conditional Random Field De-identifier (CRFD), which utilizes global context in addition to the local context of Stat De-id, confirms this finding (F-measure=88%) and establishes that strengthening the representation of local context may be more beneficial for de-identification than complementing local with global context.",
"title": ""
},
{
"docid": "16a6c26d6e185be8383c062c6aa620f8",
"text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.",
"title": ""
},
{
"docid": "da5f44562df4d13f2f8687344d4c4fd0",
"text": "Location finding by using wireless technology is one of the emerging and important technologies of wireless sensor networks. GPS can be utilized for outdoor areas only it cannot be used for tracking the user inside the building. The main motivation of this paper is to implement the system which can locate and track the user inside the building. Indoor locations include buildings like an airport, huge malls, supermarkets, universities and large infrastructures. The significant problem that this system solves is of tracking the user inside the building. The accurate indoor location can be found out by using the Received Signal Strength Indication (RSSI). The additional hardware is not required for RSSI, and moreover, it is easy to understand. The RSS (Received Signal Strength) values are calculated with the help of WiFi Access points and the mobile device. The system should provide the exact location of the user and also track the user. This paper presents a system that helps in finding out the exact location and tracking of the mobile device in the indoor environment. It can also be used to navigate the user to a required destination using the navigation function.",
"title": ""
},
{
"docid": "60e16b0c5bff9f7153c64a38193b8759",
"text": "The “Flash Crash” of May 6th, 2010 comprised an unprecedented 1,000 point, five-minute decline in the Dow Jones Industrial Average that was followed by a rapid, disorderly recovery of prices. We illuminate the causes of this singular event with the first analysis that tracks the full order book activity at millisecond granularity. We document previously overlooked market data anomalies and establish that these anomalies Granger-caused liquidity withdrawal. We offer a simulation model that formalizes the process by which large sell orders, combined with widespread liquidity withdrawal, can generate Flash Crash-like events in the absence of fundamental information arrival. ∗This work was supported by the Hellman Fellows Fund and the Rock Center for Corporate Governance at Stanford University. †Email: ealdrich@ucsc.edu. ‡Email: grundfest@stanford.edu §Email: gregory.laughlin@yale.edu",
"title": ""
},
{
"docid": "fa292adbad54c22fce27afbc5467efad",
"text": "This paper presents the results of a case study on the impacts of implementing Enterprise Content Management Systems (ECMSs) in an organization. It investigates how these impacts are influenced by the functionalities of an ECMS and by the nature of the ECMS-supported processes. The results confirm that both factors do influence the impacts. Further, the results indicate that the implementation of an ECMS can change the nature of ECMS-supported processes. It is also demonstrated that the functionalities of an ECMS need to be aligned with the nature of the processes of the implementing organization. This finding confirms previous research from the Workflow Management domain and extends it to the ECM domain. Finally, the case study results show that implementing an ECMS to support rather ‘static’ processes can be expected to cause more and stronger impacts than the support of ‘flexible’ processes.",
"title": ""
},
{
"docid": "a1f4b4c6e98e6b5e8b7f939318a5e808",
"text": "A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.",
"title": ""
},
{
"docid": "3a834b5c9f5621c1801c7650b33f1e41",
"text": "Human-to-human infection, as a type of fatal public health threats, can rapidly spread, resulting in a large amount of labor and health cost for treatment, control and prevention. To slow down the spread of infection, social network is envisioned to provide detailed contact statistics to isolate susceptive people who has frequent contacts with infected patients. In this paper, we propose a novel human-to-human infection analysis approach by exploiting social network data and health data that are collected by social network and e-healthcare technologies. We enable the social cloud server and health cloud server to exchange social contact information of infected patients and user's health condition in a privacy-preserving way. Specifically, we propose a privacy-preserving data query method based on conditional oblivious transfer to guarantee that only the authorized entities can query users’ social data and the social cloud server cannot infer anything during the query. In addition, we propose a privacy-preserving classification-based infection analysis method that can be performed by untrusted cloud servers without accessing the users’ health data. The performance evaluation shows that the proposed approach achieves higher infection analysis accuracy with the acceptable computational overhead.",
"title": ""
}
] |
scidocsrr
|
6b4ac0273c202d4590da020f0591d10c
|
Requirements for Data Quality Metrics
|
[
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
}
] |
[
{
"docid": "15999217dea6ba3ab33ed193f83a42a3",
"text": "This paper describes a very low cost MMIC high power amplifier (HPA) with output power of over 7W. The MMIC was fabricated using a GaAs PHEMT process with a state-of-the-art compact die area of 13.7mm2. The HPA MMIC contains a phase and amplitude compensated output power combiner and super low loss phase compensated inter-stage matching networks. A four stage amplifier demonstrated commercially available GaN PHEMT based HPA equivalent performance with 7W saturated output power and 24dB small signal gain from 27.5GHz to 30GHz with peak output power of 8.3W and power added efficiency (PAE) of 27%. This low cost MMIC HPA achieved approximately 10-times lower production cost than GaN PHEMT based MMIC HPAs.",
"title": ""
},
{
"docid": "749f79007256f570b73983b8d3f36302",
"text": "This paper addresses some of the potential benefits of using fuzzy logic controllers to control an inverted pendulum system. The stages of the development of a fuzzy logic controller using a four input Takagi-Sugeno fuzzy model were presented. The main idea of this paper is to implement and optimize fuzzy logic control algorithms in order to balance the inverted pendulum and at the same time reducing the computational time of the controller. In this work, the inverted pendulum system was modeled and constructed using Simulink and the performance of the proposed fuzzy logic controller is compared to the more commonly used PID controller through simulations using Matlab. Simulation results show that the fuzzy logic controllers are far more superior compared to PID controllers in terms of overshoot, settling time and response to parameter changes.",
"title": ""
},
{
"docid": "3b54f22dd95670f618650f2d71e58068",
"text": "This paper proposes a novel multi-view human action recognition method by discovering and sharing common knowledge among different video sets captured in multiple viewpoints. To our knowledge, we are the first to treat a specific view as target domain and the others as source domains and consequently formulate the multi-view action recognition into the cross-domain learning framework. First, the classic bag-of-visual word framework is implemented for visual feature extraction in individual viewpoints. Then, we propose a cross-domain learning method with block-wise weighted kernel function matrix to highlight the saliency components and consequently augment the discriminative ability of the model. Extensive experiments are implemented on IXMAS, the popular multi-view action dataset. The experimental results demonstrate that the proposed method can consistently outperform the state of the arts.",
"title": ""
},
{
"docid": "6f56d10f90b1b3ba0c1700fa06c9199e",
"text": "Finding human faces automatically in an image is a dif cult yet important rst step to a fully automatic face recognition system This paper presents an example based learning approach for locating unoccluded frontal views of human faces in complex scenes The technique represents the space of human faces by means of a few view based face and non face pattern prototypes At each image location a value distance measure is com puted between the local image pattern and each prototype A trained classi er determines based on the set of dis tance measurements whether a human face exists at the current image location We show empirically that our distance metric is critical for the success of our system",
"title": ""
},
{
"docid": "91a56dbdefc08d28ff74883ec10a5d6e",
"text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.",
"title": ""
},
{
"docid": "1f5c52945d83872a93749adc0e1a0909",
"text": "Turmeric, derived from the plant Curcuma longa, is a gold-colored spice commonly used in the Indian subcontinent, not only for health care but also for the preservation of food and as a yellow dye for textiles. Curcumin, which gives the yellow color to turmeric, was first isolated almost two centuries ago, and its structure as diferuloylmethane was determined in 1910. Since the time of Ayurveda (1900 B.C) numerous therapeutic activities have been assigned to turmeric for a wide variety of diseases and conditions, including those of the skin, pulmonary, and gastrointestinal systems, aches, pains, wounds, sprains, and liver disorders. Extensive research within the last half century has proven that most of these activities, once associated with turmeric, are due to curcumin. Curcumin has been shown to exhibit antioxidant, antiinflammatory, antiviral, antibacterial, antifungal, and anticancer activities and thus has a potential against various malignant diseases, diabetes, allergies, arthritis, Alzheimer’s disease, and other chronic illnesses. Curcumin can be considered an ideal “Spice for Life”. Curcumin is the most important fraction of turmeric which is responsible for its biological activity. In the present work we have investigated the qualitative and quantitative determination of curcumin in the ethanolic extract of C.longa. Qualitative estimation was carried out by thin layer chromatographic (TLC) method. The total phenolic content of the ethanolic extract of C.longa was found to be 11.24 as mg GAE/g. The simultaneous determination of the pharmacologically important active curcuminoids viz. curcumin, demethoxycurcumin and bisdemethoxycurcumin in Curcuma longa was carried out by spectrophotometric and HPLC techniques. HPLC separation was performed on a Cyber Lab C-18 column (250 x 4.0 mm, 5μ) using acetonitrile and 0.1 % orthophosphoric acid solution in water in the ratio 60 : 40 (v/v) at flow rate of 0.5 mL/min. Detection of curcuminoids were performed at 425 nm.",
"title": ""
},
{
"docid": "f435edc49d4907e8132f436cc43338db",
"text": "OBJECTIVE\nDepression is common among patients with diabetes, but its relationship to glycemic control has not been systematically reviewed. Our objective was to determine whether depression is associated with poor glycemic control.\n\n\nRESEARCH DESIGN AND METHODS\nMedline and PsycINFO databases and published reference lists were used to identify studies that measured the association of depression with glycemic control. Meta-analytic procedures were used to convert the findings to a common metric, calculate effect sizes (ESs), and statistically analyze the collective data.\n\n\nRESULTS\nA total of 24 studies satisfied the inclusion and exclusion criteria for the meta-analysis. Depression was significantly associated with hyperglycemia (Z = 5.4, P < 0.0001). The standardized ES was in the small-to-moderate range (0.17) and was consistent, as the 95% CI was narrow (0.13-0.21). The ES was similar in studies of either type 1 or type 2 diabetes (ES 0.19 vs. 0.16) and larger when standardized interviews and diagnostic criteria rather than self-report questionnaires were used to assess depression (ES 0.28 vs. 0.15).\n\n\nCONCLUSIONS\nDepression is associated with hyperglycemia in patients with type 1 or type 2 diabetes. Additional studies are needed to establish the directional nature of this relationship and to determine the effects of depression treatment on glycemic control and the long-term course of diabetes.",
"title": ""
},
{
"docid": "c94abfc9bac978544366f43788843bbe",
"text": "In this paper, we propose a new feature extraction approach for face recognition based on Curvelet transform and local binary pattern operator. The motivation of this approach is based on two observations. One is that Curvelet transform is a new anisotropic multi-resolution analysis tool, which can effectively represent image edge discontinuities; the other is that local binary pattern operator is one of the best current texture descriptors for face images. As the curvelet features in different frequency bands represent different information of the original image, we extract such features using different methods for different frequency bands. Technically, the lowest frequency band component is processed using the local binary urvelet transform ocal binary pattern ocal property preservation pattern method, and only the medium frequency band components are normalized. And then, we combine them to create a feature set, and use the local preservation projection to reduce its dimension. Finally, we classify the test samples using the nearest neighbor classifier in the reduced space. Extensive experiments on the Yale database, the extended Yale B database, the PIE pose 09 database, and the FRGC database illustrate the effectiveness of the proposed method.",
"title": ""
},
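The curvelet-plus-LBP passage above relies on the local binary pattern operator as its texture descriptor. As a minimal illustration of that step only (the curvelet decomposition, locality preserving projection, and nearest-neighbour classification are not reproduced, and the function name is my own), a NumPy sketch of the basic 8-neighbour LBP code map follows.

import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 local binary pattern: each pixel is compared with its
    8 neighbours and the comparison bits are packed into one byte."""
    img = np.asarray(img, dtype=np.float64)
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes  # one code per interior pixel

# Example: a 256-bin LBP histogram could then serve as a texture feature.
face = np.random.rand(32, 32)
hist, _ = np.histogram(lbp_8neighbour(face), bins=256, range=(0, 256))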
{
"docid": "8785e51ebe39057012b81c37a6ddc097",
"text": "In this paper, we present a set of distributed algorithms for estimating the electro-mechanical oscillation modes of large power system networks using synchrophasors. With the number of phasor measurement units (PMUs) in the North American grid scaling up to the thousands, system operators are gradually inclining toward distributed cyber-physical architectures for executing wide-area monitoring and control operations. Traditional centralized approaches, in fact, are anticipated to become untenable soon due to various factors such as data volume, security, communication overhead, and failure to adhere to real-time deadlines. To address this challenge, we propose three different communication and computational architectures by which estimators located at the control centers of various utility companies can run local optimization algorithms using local PMU data, and thereafter communicate with other estimators to reach a global solution. Both synchronous and asynchronous communications are considered. Each architecture integrates a centralized Prony-based algorithm with several variants of alternating direction method of multipliers (ADMM). We discuss the relative advantages and bottlenecks of each architecture using simulations of IEEE 68-bus and IEEE 145-bus power system, as well as an Exo-GENI-based software defined network.",
"title": ""
},
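The distributed-estimation passage above builds on Prony analysis for oscillation-mode estimation. As a rough, centralized illustration only (the ADMM-based distributed variants it describes are not shown, and the signal, model order, and sampling step are placeholders), the sketch below fits a linear-prediction model to a ringdown signal and recovers modal frequencies and damping from the roots of the characteristic polynomial.

import numpy as np

def prony_modes(y, order, dt):
    """Classical Prony step: fit y[n] ~ c1*y[n-1] + ... + cp*y[n-p] by least
    squares, then take the roots of z^p - c1*z^(p-1) - ... - cp as mode poles."""
    y = np.asarray(y, dtype=float)
    H = np.column_stack([y[order - k - 1:len(y) - k - 1] for k in range(order)])
    c, *_ = np.linalg.lstsq(H, y[order:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], -c))).astype(complex)
    s = np.log(roots) / dt                   # continuous-time poles
    freq_hz = np.abs(s.imag) / (2 * np.pi)   # oscillation frequencies
    damping = -s.real                        # damping coefficients
    return freq_hz, damping

# Toy ringdown: a lightly damped 0.5 Hz inter-area-like mode.
dt = 0.02
t = np.arange(0, 10, dt)
y = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.5 * t)
print(prony_modes(y, order=4, dt=dt))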
{
"docid": "edb17cb58e7fd5862c84b53e9c9f2915",
"text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.chb.2011.12.003 ⇑ Corresponding author. Tel.: +49 40 41346826; fax E-mail addresses: sabine.trepte@uni-hamburg.de ( uni-hamburg.de (L. Reinecke), keno.juechems@stu Juechems). Online gaming has gained millions of users around the globe, which have been shown to virtually connect, to befriend, and to accumulate online social capital. Today, as online gaming has become a major leisure time activity, it seems worthwhile asking for the underlying factors of online social capital acquisition and whether online social capital increases offline social support. In the present study, we proposed that the online game players’ physical and social proximity as well as their mutual familiarity influence bridging and bonding social capital. Physical proximity was predicted to positively influence bonding social capital online. Social proximity and familiarity were hypothesized to foster both online bridging and bonding social capital. Additionally, we hypothesized that both social capital dimensions are positively related to offline social support. The hypotheses were tested with regard to members of e-sports clans. In an online survey, participants (N = 811) were recruited via the online portal of the Electronic Sports League (ESL) in several countries. The data confirmed all hypotheses, with the path model exhibiting an excellent fit. The results complement existing research by showing that online gaming may result in strong social ties, if gamers engage in online activities that continue beyond the game and extend these with offline activities. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c8bdd764d2609af3bd06cb0427afb3ff",
"text": "This article investigates suitable biometrics for the identification of children. It describes the different properties of a good biometric, and concludes with some recommendations regarding ethics and the use of biometrics for children.",
"title": ""
},
{
"docid": "d03abae94005c27aa46c66e1cdc77b23",
"text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.",
"title": ""
},
{
"docid": "a4d3cebea4be0bbb7890c033e7f252c1",
"text": "In this paper, we investigate continuum manipulators that are analogous to conventional rigid-link parallel robot designs. These “parallel continuum manipulators” have the potential to inherit some of the compactness and compliance of continuum robots while retaining some of the precision, stability, and strength of rigid-link parallel robots, yet they represent a relatively unexplored area of the broad manipulator design space. We describe the construction of a prototype manipulator structure with six compliant legs connected in a parallel pattern similar to that of a Stewart-Gough platform. We formulate the static forward and inverse kinematics problems for such manipulators as the solution to multiple Cosserat-rod models with coupled boundary conditions, and we test the accuracy of this approach in a set of experiments, including the prediction of leg buckling. An inverse kinematics simulation of slices through the 6 degree-of-freedom (DOF) workspace illustrates the kinematic mapping, range of motion, and force required for actuation, which sheds light on the potential advantages and tradeoffs that parallel continuum manipulators may bring. Potential applications include miniature wrists and arms for endoscopic medical procedures, and lightweight compliant arms for safe interaction with humans.",
"title": ""
},
{
"docid": "d5f905fb66ba81ecde0239a4cc3bfe3f",
"text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.",
"title": ""
},
{
"docid": "7411be59eacb3ecad53204b300e17c24",
"text": "In this study, a finger exoskeleton robot has been designed and presented. The prototype device was designed to be worn on the dorsal side of the hand to assist in the movement and rehabilitation of the fingers. The finger exoskeleton is 3D-printed to be low-cost and has a transmission mechanism consisting of rigid serial links which is actuated by a stepper motor. The actuation of the robotic finger is by a sliding motion and mimics the movement of the human finger. To make it possible for the patient to use the rehabilitation device anywhere and anytime, an ArduinoTM control board and a speech recognition board were used to allow voice control. As the robotic finger follows the patients voice commands the actual motion is analyzed by Tracker image analysis software. The finger exoskeleton is designed to flex and extend the fingers, and has a rotation range of motion (ROM) of 44.2◦.",
"title": ""
},
{
"docid": "f645a4dc6d3eba8536dac317770f43c6",
"text": "We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: A set of homographies is estimated for each frame pair, and one each is selected for aligning pixels such that the color-discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame per-pixel depth maps.",
"title": ""
},
{
"docid": "3b7ac492add26938636ae694ebb14b65",
"text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction",
"title": ""
},
{
"docid": "3a5be5b365cfdc6f29646bf97953fc18",
"text": "Fuzzy set methods have been used to model and manage uncertainty in various aspects of image processing, pattern recognition, and computer vision. High-level computer vision applications hold a great potential for fuzzy set theory because of its links to natural language. Linguistic scene description, a language-based interpretation of regions and their relationships, is one such application that is starting to bear the fruits of fuzzy set theoretic involvement. In this paper, we are expanding on two earlier endeavors. We introduce new families of fuzzy directional relations that rely on the computation of histograms of forces. These families preserve important relative position properties. They provide inputs to a fuzzy rule base that produces logical linguistic descriptions along with assessments as to the validity of the descriptions. Each linguistic output uses hedges from a dictionary of about 30 adverbs and other terms that can be tailored to individual users. Excellent results from several synthetic and real image examples show the applicability of this approach.",
"title": ""
},
{
"docid": "e32a7f4ee9330cffbcb56b0ff2b7fb8c",
"text": "Outsourcing has increasingly been recognized as a source of great competitive advantage. By moving away from vertical integration and towards outsourcing, firms face a new challenge in integrating the various activities of the supply chain. The advent of novel information technologies, however, has made the integration of supply chain activities more manageable. In this study, we evaluate the effects of the usage of different information technologies on supplier and logistics integration in supply chains. A cross-sectional mail survey of ISM members in the United States was utilized to collect empirical data. ANOVA analysis was conducted to delineate the differences in the integration constructs across different levels of information technology usage. The results provide empirical evidence supporting the fact that information technology engenders supplier as well as logistics integration.",
"title": ""
},
{
"docid": "f40125e7cc8279a5514deaf1146684de",
"text": "Summary Several models explain how a complex integrated system like the rodent mandible can arise from multiple developmental modules. The models propose various integrating mechanisms, including epigenetic effects of muscles on bones. We test five for their ability to predict correlations found in the individual (symmetric) and fluctuating asymmetric (FA) components of shape variation. We also use exploratory methods to discern patterns unanticipated by any model. Two models fit observed correlation matrices from both components: (1) parts originating in same mesenchymal condensation are integrated, (2) parts developmentally dependent on the same muscle form an integrated complex as do those dependent on teeth. Another fits the correlations observed in FA: each muscle insertion site is an integrated unit. However, no model fits well, and none predicts the complex structure found in the exploratory analyses, best described as a reticulated network. Furthermore, no model predicts the correlation between proximal parts of the condyloid and coronoid, which can exceed the correlations between proximal and distal parts of the same process. Additionally, no model predicts the correlation between molar alveolus and ramus and/or angular process, one of the highest correlations found in the FA component. That correlation contradicts the basic premise of all five developmental models, yet it should be anticipated from the epigenetic effects of mastication, possibly the primary morphogenetic process integrating the jaw coupling forces generated by muscle contraction with those experienced at teeth.",
"title": ""
}
] |
scidocsrr
|
f91e39e3c133869f055c3cb983c6a06d
|
End-to-End Deep Knowledge Tracing by Learning Binary Question-Embedding
|
[
{
"docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97",
"text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
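Since the core of BinaryConnect as described above is binarizing weights for the forward and backward passes while accumulating gradient updates in full precision, a minimal NumPy-style sketch of that loop for a single linear layer with a squared loss is given below; it is illustrative only and omits the stochastic binarization variant, learning-rate scaling, and the deep-network setup of the actual method.

import numpy as np

rng = np.random.default_rng(0)
W_real = rng.normal(scale=0.1, size=(4, 3))   # full-precision weight accumulator
lr = 0.01

def binarize(W):
    # Deterministic sign binarization to {-1, +1}.
    return np.where(W >= 0, 1.0, -1.0)

for step in range(100):
    x = rng.normal(size=(8, 4))               # a toy mini-batch
    y = x @ np.array([[1.0, -1.0, 1.0],
                      [1.0,  1.0, -1.0],
                      [-1.0, 1.0,  1.0],
                      [1.0, -1.0, -1.0]])     # toy regression targets
    W_bin = binarize(W_real)                  # binary weights used in both passes
    pred = x @ W_bin
    grad_pred = (pred - y) / len(x)           # d(MSE)/d(pred)
    grad_W = x.T @ grad_pred                  # gradient w.r.t. the binary weights
    W_real -= lr * grad_W                     # update accumulates in full precision
    W_real = np.clip(W_real, -1.0, 1.0)       # keep the accumulator bounded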
{
"docid": "4b249892383502155243585934f1af3e",
"text": "Modeling a student's knowledge state while she is solving exercises is a crucial stepping stone towards providing better personalized learning experiences at scale. This task, also referred to as \"knowledge tracing\", has been explored extensively on exercises where student submissions fall into a finite discrete solution space, e.g. a multiple-choice answer. However, we believe that rich information about a student's learning is captured within their responses to open-ended problems with unbounded solution spaces, such as programming exercises. In addition, sequential snapshots of a student's progress while she is solving a single exercise can provide valuable insights into her learning behavior. In this setting, creating representations for a student's knowledge state is a challenging task, but with recent advances in machine learning, there are more promising techniques to learn representations for complex entities. In our work, we feed the embedded program submissions into a recurrent neural network and train it on the task of predicting the student's success on the subsequent programming exercise. By training on this task, the model learns nuanced representations of a student's knowledge, and reliably predicts future student performance.",
"title": ""
},
{
"docid": "7209596ad58da21211bfe0ceaaccc72b",
"text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.",
"title": ""
},
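Performance Factors Analysis, as summarized above, is commonly implemented as a logistic regression over per-skill prior success and failure counts. A hypothetical scikit-learn sketch of that formulation is shown below; the interaction log, feature layout, and skill names are fabricated for illustration and are not the authors' code.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy log of (student, skill, correct) interactions in time order.
log = [("s1", "add", 0), ("s1", "add", 1), ("s1", "add", 1),
       ("s2", "add", 1), ("s2", "sub", 0), ("s2", "sub", 1)]
skills = sorted({skill for _, skill, _ in log})
idx = {s: i for i, s in enumerate(skills)}

X, y, counts = [], [], {}
for student, skill, correct in log:
    prior_s, prior_f = counts.get((student, skill), (0, 0))
    row = np.zeros(3 * len(skills))
    row[idx[skill]] = 1.0                          # skill easiness intercept
    row[len(skills) + idx[skill]] = prior_s        # prior successes on this skill
    row[2 * len(skills) + idx[skill]] = prior_f    # prior failures on this skill
    X.append(row)
    y.append(correct)
    counts[(student, skill)] = (prior_s + correct, prior_f + (1 - correct))

model = LogisticRegression().fit(np.array(X), np.array(y))
print(model.predict_proba(np.array(X))[:, 1])      # P(correct) per opportunity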
{
"docid": "2cf13325c8901f25418f6c6266106075",
"text": "Knowledge tracing—where a machine models the knowledge of a student as they interact with coursework—is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.",
"title": ""
}
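For the recurrent knowledge-tracing idea in the passage above, a compact PyTorch sketch is given below: an LSTM consumes encoded (question, correctness) interactions and predicts, at each step, the probability of answering every question correctly next. The tensor shapes, hidden size, and one-hot encoding are placeholders rather than the authors' configuration.

import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, num_questions, hidden_size=64):
        super().__init__()
        # Input: one-hot over 2 * num_questions (question id x correct/incorrect).
        self.rnn = nn.LSTM(2 * num_questions, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_questions)

    def forward(self, interactions):
        h, _ = self.rnn(interactions)          # (batch, time, hidden)
        return torch.sigmoid(self.out(h))      # P(correct) for every question

num_q, batch, steps = 10, 4, 20
model = DKT(num_q)
x = torch.zeros(batch, steps, 2 * num_q)       # toy one-hot interaction encoding
targets = torch.randint(0, 2, (batch, steps, num_q)).float()
loss = nn.functional.binary_cross_entropy(model(x), targets)
loss.backward()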
] |
[
{
"docid": "630f5845ee863b5ba42c19ae3f8011dd",
"text": "INTRODUCTION\nThis is the first clinical long-term pilot study that tested the biomimetic concept of using more flexible, dentine-like (low Young modulus) glass fiber-reinforced epoxy resin posts (GFREPs) compared with rather rigid, stiff (higher Young modulus) titanium posts (TPs) in order to improve the survival rate of severely damaged endodontically treated teeth.\n\n\nMETHODS\nNinety-one subjects in need of postendodontic restorations in teeth with 2 or less remaining cavity walls were randomly assigned to receive either a tapered TP (n = 46) or a tapered GFREP (n = 45). The posts were adhesively luted using self-adhesive resin cement. The composite core build-ups were prepared ensuring a circumferential 2-mm ferrule. The primary endpoint was a loss of restoration for any reason. To study group differences, the log-rank test was calculated (P < .05). Hazard plots were constructed.\n\n\nRESULTS\nAfter 84 months of observation (mean = 71.2 months), 7 restorations failed (ie, 4 GFREPs and 3 TPs). The failure modes were as follows: GFREP:root fracture (n = 3), core fracture (n = 1) and TP:endodontic failure (n = 3). No statistical difference was found between the survival rates (GFREPs = 90.2%, TPs = 93.5%, P = .642). The probability of no failure was comparable for both post materials (risk ratio; 95% confidence interval, 0.965-0.851/1.095).\n\n\nCONCLUSIONS\nWhen using self-adhesive luted prefabricated posts in severely destroyed abutment teeth with 2 or less cavity walls and a 2-mm ferrule, postendodontic restorations achieved high long-term survival rates irrespective of the post material used (ie, glass fiber vs titanium).",
"title": ""
},
{
"docid": "dfb68d81ed159e82b6c9f2e930436e97",
"text": "The last decade has seen the fields of molecular biology and genetics transformed by the development of CRISPR-based gene editing technologies. These technologies were derived from bacterial defense systems that protect against viral invasion. Elegant studies focused on the evolutionary battle between CRISPR-encoding bacteria and the viruses that infect and kill them revealed the next step in this arms race, the anti-CRISPR proteins. Investigation of these proteins has provided important new insight into how CRISPR-Cas systems work and how bacterial genomes evolve. They have also led to the development of important biotechnological tools that can be used for genetic engineering, including off switches for CRISPR-Cas9 genome editing in human cells.",
"title": ""
},
{
"docid": "01d8f6e022099977bdcf92ee5735e11d",
"text": "We present a novel deep learning based image inpainting system to complete images with free-form masks and inputs. e system is based on gated convolutions learned from millions of images without additional labelling efforts. e proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shapes, global and local GANs designed for a single rectangular mask are not suitable. To this end, we also present a novel GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminators on dense image patches. It is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more exible results than previous methods. We show that our system helps users quickly remove distracting objects, modify image layouts, clear watermarks, edit faces and interactively create novel objects in images. Furthermore, visualization of learned feature representations reveals the eectiveness of gated convolution and provides an interpretation of how the proposed neural network lls in missing regions. More high-resolution results and video materials are available at hp://jiahuiyu.com/deepll2.",
"title": ""
},
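The free-form inpainting abstract above centers on gated convolutions, i.e., a convolution whose output is modulated by a learned soft gate. A hedged PyTorch sketch of one such layer follows; the channel counts, ELU activation, and 4-channel image-plus-mask input are assumptions, and the SN-PatchGAN loss is not shown.

import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Feature branch * sigmoid(gating branch): a learnable, per-pixel,
    per-channel soft mask in place of vanilla convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# Input: RGB image concatenated with the free-form hole mask (4 channels).
layer = GatedConv2d(4, 32)
x = torch.rand(1, 4, 64, 64)
print(layer(x).shape)   # torch.Size([1, 32, 64, 64])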
{
"docid": "b00ade5badb574074aa15bc995999f1e",
"text": "VPN solutions can be deployed on a wireless network infrastructure to secure transmission between wireless clients and their wired enterprise network. There are many software platforms that can be used to implement software-based VPN solution such as windows, Linux, Solaris, Mac, and BSD. In this paper, the performance evaluation of some remote access VPN solutions, namely Point to Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol over Internet Protocol Security (L2TP/IPSec), and Secure Socket Layer (SSL) will be empirically investigated on wireless networks. Some of QoS performance metrics like throughput, latency, jitter, and packet loss are measured to explore the impact of these VPNs on the ultimate performance perceived by end user applications. All experiments were conducted using wireless VPN client (vpn01Client) connected to domain controller server (dc01Server) through VPN server (vpn01Server).",
"title": ""
},
{
"docid": "dcecf343d3bc76a212a47ea231786c51",
"text": "BACKGROUND AND PURPOSE\nStroke prevention clinics (SPCs) are not usually involved with the active management of hypertension, hyperlipidemia, diabetes, and smoking. The effect of consultations generated at SPCs on the adequacy of the management of these risk factors for stroke has not been well described, and few studies have long-term follow-up.\n\n\nMETHODS\nWe performed a prospective study of 119 consecutive patients referred to an SPC for secondary prevention. One year after their baseline visit, patients were re-evaluated for the adequacy of the management of the above risk factors, and the proportion of improvement was assessed.\n\n\nRESULTS\nOne-hundred twelve patients returned for their 1-year follow-up visit. Sixty-six were male, and the average age was 65 years. Hypertension was present in 83 patients, hyperlipidemia in 92, diabetes in 26, and smoking in 38, and 80 had multiple risk factors. At baseline, 66% of patients with hypertension, 17% of patients with hyperlipidemia, and 23% of diabetics had adequate management of their respective risk factors. During 1 year of follow-up, hypertension management improved 20% (P<0.001) and lipid management improved 32% (P<0.001). There was no significant improvement in diabetes management or smoking cessation.\n\n\nCONCLUSIONS\nAlthough our understanding of the benefit of addressing hypertension, hyperlipidemia, diabetes, and smoking for secondary prevention of stroke is evolving, we found marked room for improvement in the management of these four risk factors. SPCs may need to be more actively involved in the management of these modifiable risk factors, if we are to significantly impact the risk of recurrent stroke.",
"title": ""
},
{
"docid": "19c3bd8d434229d98741b04d3041286b",
"text": "The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).",
"title": ""
},
{
"docid": "791cc656afc2d36e1f491c5a80b77b97",
"text": "With the wide diffusion of smartphones and their usage in a plethora of processes and activities, these devices have been handling an increasing variety of sensitive resources. Attackers are hence producing a large number of malware applications for Android (the most spread mobile platform), often by slightly modifying existing applications, which results in malware being organized in families. Some works in the literature showed that opcodes are informative for detecting malware, not only in the Android platform. In this paper, we investigate if frequencies of ngrams of opcodes are effective in detecting Android malware and if there is some significant malware family for which they are more or less effective. To this end, we designed a method based on state-of-the-art classifiers applied to frequencies of opcodes ngrams. Then, we experimentally evaluated it on a recent dataset composed of 11120 applications, 5560 of which are malware belonging to several different families. Results show that an accuracy of 97% can be obtained on the average, whereas perfect detection rate is achieved for more than one malware family.",
"title": ""
},
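The opcode-frequency idea above maps naturally onto standard text-classification tooling. A minimal scikit-learn sketch is shown below, treating each app's opcode sequence as a document and classifying its n-gram frequency vector; the opcode strings, labels, and classifier choice are fabricated placeholders rather than the authors' pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Each "document" is an app's opcode sequence; labels: 1 = malware, 0 = benign.
apps = ["move invoke move return", "invoke invoke goto return",
        "move move add return", "goto invoke move invoke"]
labels = [1, 0, 0, 1]

pipeline = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),      # unigram + bigram opcode frequencies
    RandomForestClassifier(n_estimators=50, random_state=0),
)
pipeline.fit(apps, labels)
print(pipeline.predict(["invoke move invoke return"]))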
{
"docid": "815fe60934f0313c56e631d73b998c95",
"text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.",
"title": ""
},
{
"docid": "2ab6bc212e45c3d5775e760e5a01c0ef",
"text": "The face recognition systems are used to recognize the person by using merely a person’s image. The face detection scheme is the primary method which is used to extract the region of interest (ROI). The ROI is further processed under the face recognition scheme. In the proposed model, we are going to use the cross-correlation algorithm along with the viola jones for the purpose of face recognition to recognize the person. The proposed model is proposed using the Cross-correlation algorithm along with cross correlation scheme in order to recognize the person by evaluating the facial features.",
"title": ""
},
{
"docid": "5b545c14a8784383b8d921eb27991749",
"text": "In this chapter, neural networks are used to predict the future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. Standard and Poor 500 (S&P 500) is used in experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, effects of NASDAQ 100 are studied on prediction of S&P 500. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETF’s), which attempt to replicate the performance of S&P 500 by holding the same stocks in the same proportions as the index, and therefore, giving the same percentage returns as S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicates the performance of S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on S&P 500.",
"title": ""
},
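As a small illustration of the wavelet de-noising step mentioned above (not the full prediction or trading system), the sketch below soft-thresholds the detail coefficients of a price series with PyWavelets; the wavelet family, decomposition level, and threshold rule are assumptions.

import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(series)]

prices = np.cumsum(np.random.randn(512)) + 100.0   # toy price path
smooth = wavelet_denoise(prices)                   # input for the neural network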
{
"docid": "03f913234dc6d41aada7ce3fe8de1203",
"text": "Epicanthoplasty is commonly performed on Asian eyelids. Consequently, overcorrection may appear. The aim of this study was to introduce a method of reconstructing the epicanthal fold and to apply this method to the patients. A V flap with an extension (eagle beak shaped) was designed on the medial canthal area. The upper incision line started near the medial end of the double-fold line, and it followed its curvature inferomedially. For the lower incision, starting at the tip (medial end) of the flap, a curvilinear incision was designed first diagonally and then horizontally along the lower blepharoplasty line. The V flap was elevated as thin as possible. Then, the upper flap was deeply undermined to make it thick. The lower flap was made a little thinner than the upper flap. Then, the upper and lower flaps were approximated to form the anteromedial surface of the epicanthal fold in a fashion sufficient to cover the red caruncle. The V flap was rotated inferolaterally over the caruncle. The tip of the V flap was sutured to the medial one-third point of the lower margin. The inferior border of the V flap and the residual lower margin were approximated. Thereafter, the posterolateral surface of the epicanthal fold was made. From 1999 to 2011, 246 patients were operated on using this method. Among them, 62 patients were followed up. The mean intercanthal distance was increased from 31.7 to 33.8 mm postoperatively. Among the 246 patients operated on, reoperation was performed for 6 patients. Among the 6 patients reoperated on, 3 cases were due to epicanthus inversus, 1 case was due to insufficient reconstruction, 1 case was due to making an infold, and 1 case was due to reopening the epicanthal fold.This V-Y and rotation flap can be a useful method for reconstruction of the epicanthal fold.",
"title": ""
},
{
"docid": "2f7a0eaf15515a9cf8cbbebc4d734072",
"text": "Rifampicin (Rif) is one of the most potent and broad spectrum antibiotics against bacterial pathogens and is a key component of anti-tuberculosis therapy, stemming from its inhibition of the bacterial RNA polymerase (RNAP). We determined the crystal structure of Thermus aquaticus core RNAP complexed with Rif. The inhibitor binds in a pocket of the RNAP beta subunit deep within the DNA/RNA channel, but more than 12 A away from the active site. The structure, combined with biochemical results, explains the effects of Rif on RNAP function and indicates that the inhibitor acts by directly blocking the path of the elongating RNA when the transcript becomes 2 to 3 nt in length.",
"title": ""
},
{
"docid": "81ca5239dbd60a988e7457076aac05d7",
"text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.",
"title": ""
},
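The AQ-10 result above reduces, operationally, to summing ten binary item scores, applying a cut-point, and evaluating sensitivity and specificity against diagnostic status. A small illustrative computation with made-up responses is sketched below; it is not the published scoring key.

import numpy as np

def screen_positive(item_scores, cut_point=6):
    # Each of the 10 items contributes 0 or 1; refer for assessment if total >= cut-point.
    return int(sum(item_scores) >= cut_point)

# Fabricated example: rows are respondents, columns are the 10 binary items.
responses = np.array([[1, 1, 1, 0, 1, 1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]])
diagnosis = np.array([1, 0])                     # 1 = ASC diagnosis, 0 = control
flags = np.array([screen_positive(r) for r in responses])

sensitivity = (flags[diagnosis == 1] == 1).mean()
specificity = (flags[diagnosis == 0] == 0).mean()
print(flags, sensitivity, specificity)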
{
"docid": "5be55ce7d8f97689bf54028063ba63d7",
"text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.",
"title": ""
},
{
"docid": "f84011e3b4c8b1e80d4e79dee3ccad53",
"text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow’s fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.",
"title": ""
},
{
"docid": "366a539d763ba80954279b3657e9e562",
"text": "Modeling has become a common practice in modern software engineering. Since the mid 1990s the Unified Modeling Language (UML) has become the de facto standard for modeling software systems. The UML is used in all phases of software development: ranging from the requirement phase to the maintenance phase. However, empirical evidence regarding the effectiveness of modeling in software development is few and far apart. This paper aims to synthesize empirical evidence regarding the effectiveness of modeling using UML in software development, with a special focus on the cost and benefits.",
"title": ""
},
{
"docid": "ccecd2617d9db04e1fe2c275643e6662",
"text": "Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counter-parts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD(λ) and Sarsa(λ) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximation.",
"title": ""
},
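To make the multi-step target discussed above concrete, the sketch below computes a generic n-step TD return and the corresponding semi-gradient update for a linear value function; it illustrates the standard n-step target, not the new method proposed in that passage, and the feature vectors and constants are toy values.

import numpy as np

def n_step_td_update(w, features, rewards, n, t, gamma=0.99, alpha=0.1):
    """One semi-gradient update at time t using the n-step return
    G = r_{t+1} + ... + gamma^{n-1} r_{t+n} + gamma^n v(s_{t+n})."""
    T = len(rewards)
    horizon = min(n, T - t)
    G = sum(gamma ** k * rewards[t + k] for k in range(horizon))
    if t + n < len(features):                      # bootstrap if the episode continues
        G += gamma ** n * (features[t + n] @ w)
    w += alpha * (G - features[t] @ w) * features[t]
    return w

# Toy episode with 4-dimensional, tabular-style state features.
feats = np.eye(4)
rews = [0.0, 0.0, 1.0, 0.0]
w = np.zeros(4)
w = n_step_td_update(w, feats, rews, n=2, t=0)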
{
"docid": "d0bf246feac1b5e6924719b5b7c76189",
"text": "This paper proposes implicit CF-NADE, a neural autoregressive model for collaborative filtering tasks using implicit feedback( e.g. click/watch/browse behaviors). We first convert a user's implicit feedback into a \"like\" vector and a confidence vector, and then model the probability of the \"like\" vector, weighted by the confidence vector. The training objective of implicit CF-NADE is to maximize a weighted negative log-likelihood. We test the performance of implicit CF-NADE on a dataset collected from a popular digital TV streaming service. More specifically, in the experiments, we describe how to convert watch counts into implicit \"relative rating\", and feed into implicit CF-NADE. Then we compare the performance of implicit CF-NADE model with the popular implicit matrix factorization approach. Experimental results show that implicit CF-NADE significantly outperforms the baseline.",
"title": ""
},
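The implicit-feedback objective described above, a confidence-weighted negative log-likelihood over a binary "like" vector, can be written in a few lines. The sketch below shows only that loss term in PyTorch with fabricated tensors and an assumed log-based confidence weighting; the autoregressive CF-NADE architecture itself is not reproduced.

import torch
import torch.nn.functional as F

# Fabricated example for one user over 5 items.
watch_counts = torch.tensor([0., 3., 1., 0., 7.])
likes = (watch_counts > 0).float()                 # binary "like" vector
confidence = 1.0 + torch.log1p(watch_counts)       # assumed confidence weighting
logits = torch.randn(5, requires_grad=True)        # stand-in for the model's output scores

# Confidence-weighted negative log-likelihood of the like vector.
loss = F.binary_cross_entropy_with_logits(logits, likes, weight=confidence)
loss.backward()
print(loss.item())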
{
"docid": "3dc1598f8653c540e6e61daf2994b8ed",
"text": "Labeled graphs provide a natural way of representing entities, relationships and structures within real datasets such as knowledge graphs and protein interactions. Applications such as question answering, semantic search, and motif discovery entail efficient approaches for subgraph matching involving both label and structural similarities. Given the NP-completeness of subgraph isomorphism and the presence of noise, approximate graph matching techniques are required to handle queries in a robust and real-time manner. This paper presents a novel technique to characterize the subgraph similarity based on statistical significance captured by chi-square statistic. The statistical significance model takes into account the background structure and label distribution in the neighborhood of vertices to obtain the best matching subgraph and, therefore, robustly handles partial label and structural mismatches. Based on the model, we propose two algorithms, VELSET and NAGA, that, given a query graph, return the top-k most similar subgraphs from a (large) database graph. While VELSET is more accurate and robust to noise, NAGA is faster and more applicable for scenarios with low label noise. Experiments on large real-life graph datasets depict significant improvements in terms of accuracy and running time in comparison to the state-of-the-art methods.",
"title": ""
},
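Since the matching model above scores candidate vertices by the chi-square statistic of observed versus expected label occurrences in a neighbourhood, a toy version of that scoring step is sketched below; the label distribution is invented and the full VELSET/NAGA algorithms are not shown.

from collections import Counter

def chi_square_score(observed_labels, background_freq, neighbourhood_size):
    """Chi-square statistic comparing observed label counts in a vertex
    neighbourhood against counts expected from the background distribution."""
    observed = Counter(observed_labels)
    score = 0.0
    for label, p in background_freq.items():
        expected = p * neighbourhood_size
        if expected > 0:
            score += (observed.get(label, 0) - expected) ** 2 / expected
    return score

# Background label frequencies in the database graph (assumed).
background = {"person": 0.5, "paper": 0.3, "venue": 0.2}
print(chi_square_score(["person", "person", "paper", "paper"], background, 4))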
{
"docid": "d1850f62f3198838762c1f0dc92db2fc",
"text": "To examine the effects of dietary factor and Helicobacter pylori (H. pylori) infection with emphasis on vitamin intake on the risk of gastric cancer (GC), we conducted a case-control study in South Korea, a high-risk area for GC. Trained dietitians interviewed 136 cases histologically diagnosed with GC. An equal number of hospital controls was selected by matching sex and age. High dietary intakes of vegetable fat [odds ratio (OR) = 0.35; 95% confidence interval (CI) = 0.15-0.83], folate (OR = 0.35; 95% CI = 0.13-0.96), and antioxidants, such as vitamin A (OR = 0.34; 95% CI = 0.13-0.83), beta-carotene (OR = 0.33; 95% CI = 0.13-0.82), vitamin C (OR = 0.26; 95% CI = 0.09-0.72), and vitamin E (OR = 0.41; 95% CI = 0.17-1.01), were shown to have a protective effect on GC risk using a multivariate model adjusting for foods significantly related to GC in our previous study (charcoal grilled beef, spinach, garlic, mushroom, and a number of types of kimchi) and supplement use. When stratified according to H. pylori infection, high intakes of vitamin C (OR = 0.10; 95% CI = 0.02-0.63) and vitamin E (OR = 0.16; 95% CI = 0.03-0.83) exhibited highly significant inverse associations with GC among the H. pylori-infected subjects compared with noninfected individuals. GC risk was significantly decreased only when consumption levels for two of these vitamins were high. Our findings suggest that high intake of antioxidant vitamins contribute to the reduction of GC risk and that GC risk in Korea may be decreased by encouraging those with H. pylori infection to increase their intake of antioxidant vitamins.",
"title": ""
}
] |
scidocsrr
|
62fe653c8af1f74605ac3b607c97b223
|
End-fire phased array 5G antenna design using leaf-shaped bow-tie elements for 28/38 GHz MIMO applications
|
[
{
"docid": "09cf4ac9504132c32fe885715e58adf1",
"text": "A first-of-the-kind 28 GHz antenna solution for the upcoming 5G cellular communication is presented in detail. Extensive measurements and simulations ascertain the proposed 28 GHz antenna solution to be highly effective for cellular handsets operating in realistic propagating environments.",
"title": ""
},
{
"docid": "c67010d61ec7f9ea839bbf7d2dce72a1",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.",
"title": ""
}
] |
[
{
"docid": "9081f9ab762e4ae55b1455c8feb60987",
"text": "Energy efficiency in Wireless Sensor Networks (WSNs) has always been a hot issue and has been studied for many years. Sleep Scheduling (SS) mechanism is an efficient method to manage energy of each node and is capable to prolong the lifetime of the entire network. In this paper a Software-defined Network (SDN) based Sleep Scheduling algorithm SDN-ECCKN is proposed to manage the energy of the network. EC-CKN is adopted as the fundamental algorithm when implementing our algorithm. In the proposed SDN-ECCKN algorithm, every computation is completed in the controller rather than the sensors themselves and there is no broadcasting between each two nodes, which are the main features of the traditional EC-CKN technique. The results of our SDN-ECCKN show its advantages in energy management, such as network lifetime, the number of live nodes and the number of solo nodes in the network. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "05f25a2de55907773c9ff13b8a2fe5f6",
"text": "Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents—the \"steepness\" of the learning curve—yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.",
"title": ""
},
{
"docid": "b9b8a55afc751d77d1322de0746cc48b",
"text": "One week of solitary confinement of prison inmates produced significant changes in their EEG frequency and visual evoked potentials (VEP) that parallel those reported in laboratory studies of sensory deprivation. EEG frequency declined in a nonlinear manner over the period. VEP latency, which decreased with continued solitary confinement, was shorter for these 5s than for control 5s whose VEP latency did not change over the same period. Experimental 5s who had been in prison longer had shorter VEP latencies than relative newcomers to the prison.",
"title": ""
},
{
"docid": "cec046aa647ece5f9449c470c6c6edcf",
"text": "In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.",
"title": ""
},
{
"docid": "2f18b0f2ae52955723bb056a1ce2bdb1",
"text": "Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner. Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (multitasking), generating interference as the result of fundamental information processing limitations. It is clear that multitasking behaviour has become ubiquitous in today’s technologically dense world, and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our ageing population. Here we show that multitasking performance, as assessed with a custom-designed three-dimensional video game (NeuroRacer), exhibits a linear age-related decline from 20 to 79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60 to 85 years old) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those achieved by untrained 20-year-old participants, with gains persisting for 6 months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (enhanced midline frontal theta power and frontal–posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and preservation of multitasking improvement 6 months later. These findings highlight the robust plasticity of the prefrontal cognitive control system in the ageing brain, and provide the first evidence, to our knowledge, of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms, and serve as a powerful tool for cognitive enhancement.",
"title": ""
},
{
"docid": "2f4a4c223c13c4a779ddb546b3e3518c",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "8f5af0964740d734cc03a4bfa030ee48",
"text": "In present scenario, the security concerns have grown tremendously. The security of restricted areas such as borders or buffer zones is of utmost importance; in particular with the worldwide increase of military conflicts, illegal immigrants, and terrorism over the past decade. Monitoring such areas rely currently on technology and man power, however automatic monitoring has been advancing in order to avoid potential human errors that can be caused by different reasons. The purpose of this project is to design a surveillance system which would detect motion in a live video feed and record the video feed only at the moment where the motion was detected also to track moving object based on background subtraction using video surveillance. The moving object is identified using the image subtraction method.",
"title": ""
},
{
"docid": "064aba7f2bd824408bd94167da5d7b3a",
"text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "03e6fab6da3644d64081387018012599",
"text": "High dimensionality of POMDP's belief state space is one major cause that makes the underlying optimal policy computation intractable. Belief compression refers to the methodology that projects the belief state space to a low-dimensional one to alleviate the problem. In this paper, we propose a novel orthogonal non-negative matrix factorization (O-NMF) for the projection. The proposed O-NMF not only factors the belief state space by minimizing the reconstruction error, but also allows the compressed POMDP formulation to be efficiently computed (due to its orthogonality) in a value-directed manner so that the value function will take same values for corresponding belief states in the original and compressed state spaces. We have tested the proposed approach using a number of benchmark problems and the empirical results confirms its effectiveness in achieving substantial computational cost saving in policy computation.",
"title": ""
},
{
"docid": "75bdb67cb62002457a588ae3a82699e3",
"text": "Deep convolutional neural networks (CNNs) have shown superior performance on the task of single-label image classification. However, the applicability of CNNs to multi-label images still remains an open problem, mainly because of two reasons. First, each image is usually treated as an inseparable entity and represented as one instance, which mixes the visual information corresponding to different labels. Second, the correlations amongst labels are often overlooked. To address these limitations, we propose a deep multi-modal CNN for multi-instance multi-label image classification, called MMCNN-MIML. By combining CNNs with multi-instance multi-label (MIML) learning, our model represents each image as a bag of instances for image classification and inherits the merits of both CNNs and MIML. In particular, MMCNN-MIML has three main appealing properties: 1) it can automatically generate instance representations for MIML by exploiting the architecture of CNNs; 2) it takes advantage of the label correlations by grouping labels in its later layers; and 3) it incorporates the textual context of label groups to generate multi-modal instances, which are effective in discriminating visually similar objects belonging to different groups. Empirical studies on several benchmark multi-label image data sets show that MMCNN-MIML significantly outperforms the state-of-the-art baselines on multi-label image classification tasks.",
"title": ""
},
{
"docid": "ede6bef7b623e95cf99b1d7c85332abb",
"text": "The design of a temperature compensated IC on-chip oscillator and a low voltage detection circuitry sharing the bandgap reference is described. The circuit includes a new bandgap isolation strategy to reduce oscillator noise coupled through the current sources. The IC oscillator provides a selectable clock (11.6 MHz or 21.4 MHz) with digital trimming to minimize process variations. After fine-tuning the oscillator to the target frequency, the temperature compensated voltage and current references guarantees less than /spl plusmn/2.5% frequency variation from -40 to 125/spl deg/C, when operating from 3 V to 5 V of power supply. The low voltage detection circuit monitors the supply voltage applied to the system and generates the appropriate warning or even initiates a system shutdown before the in-circuit SoC presents malfunction. The module was implemented in a 0.5 /spl mu/m CMOS technology, occupies an area of 360 /spl times/ 530 /spl mu/m/sub 2/ and requires no external reference or components.",
"title": ""
},
{
"docid": "ccefef1618c7fa637de366e615333c4b",
"text": "Context: Systems development normally takes place in a specific organizational context, including organizational culture. Previous research has identified organizational culture as a factor that potentially affects the deployment systems development methods. Objective: The purpose is to analyze the relationship between organizational culture and the postadoption deployment of agile methods. Method: This study is a theory development exercise. Based on the Competing Values Model of organizational culture, the paper proposes a number of hypotheses about the relationship between organizational culture and the deployment of agile methods. Results: Inspired by the agile methods thirteen new hypotheses are introduced and discussed. They have interesting implications, when contrasted with ad hoc development and with traditional systems devel-",
"title": ""
},
{
"docid": "16cac565c6163db83496c41ea98f61f9",
"text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.",
"title": ""
},
{
"docid": "871386b0aa9f04eeb622617e241fc6f0",
"text": "I show that for any number of oracle lookups up to about π/4 √ N , Grover’s quantum searching algorithm gives the maximal possible probability of finding the desired element. I explain why this is also true for quantum algorithms which use measurements during the computation. I also show that unfortunately quantum searching cannot be parallelized better than by assigning different parts of the search space to independent quantum computers. 1 Quantum searching Imagine we have N cases of which only one fulfills our conditions. E.g. we have a function which gives 1 only for one out of N possible input values and gives 0 otherwise. Often an analysis of the algorithm for calculating the function will allow us to find quickly the input value for which the output is 1. Here we consider the case where we do not know better than to repeatedly calculate the function without looking at the algorithm, e.g. because the function is calculated in a black box subroutine into which we are not allowed to look. In computer science this is called an oracle. Here I consider only oracles which give 1 for exactly one input. Quantum searching for the case with several inputs which give 1 and even with an unknown number of such inputs is treated in [4]. Obviously on a classical computer we have to query the oracle on average N/2 times before we find the answer. Grover [1] has given a quantum algorithm which can solve the problem in about π/4 √ N steps. Bennett et al. [3] have shown that asymptotically no quantum algorithm can solve the problem in less than a number of steps proportional to √ N . Boyer et al. [4] have improved this result to show that e.g. for a 50% success probability no quantum algorithm can do better than only a few percent faster than Grover’s algorithm. I improve ∗Supported by Schweizerischer Nationalfonds and LANL",
"title": ""
},
{
"docid": "1bf8cc02cf21015385cd1fd20ffb2f4e",
"text": "© 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. © 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. 1Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA. 2Berkeley Sensor and Actuator Center, University of California, Berkeley, CA, USA. 3Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA. *e-mail: ajavey@eecs.berkeley.edu Healthcare systems today are mostly reactive. Patients contact doctors after they have developed ailments with noticeable symptoms, and are thereafter passive recipients of care and monitoring by specialists. This approach largely fails in preventing the onset of health conditions, prioritizing diagnostics and treatment over proactive healthcare. It further occludes individuals from being active agents in monitoring their own health. The growing field of wearable sensors (or wearables) aims to tackle the limitations of centralized, reactive healthcare by giving individuals insight into the dynamics of their own physiology. The long-term vision is to develop sensors that can be integrated into wearable formats like clothing, wristbands, patches, or tattoos to continuously probe a range of body indicators. By relaying physiological information as the body evolves over healthy and sick states, these sensors will enable individuals to monitor themselves without expensive equipment or trained professionals (Fig. 1). Various physical and chemical sensors will need to be integrated to obtain a complete picture of dynamic health. These sensors will generate vast time series of data that will need to be parsed with big-data techniques to generate personalized baselines indicative of the user’s health1–4. Sensor readings that cohere with the established baseline can then indicate that the body is in a healthy, equilibrium state, while deviations from the baseline can provide early warnings about developing health conditions. Eventually, deviations caused by different pathologies can be ‘fingerprinted’ to make diagnosis more immediate and autonomous. Together, the integration of wearables with big-data analytics can enable individualized fitness monitoring, early detection of developing health conditions, and better management of chronic diseases. This envisioned medical landscape built around continuous, point-of-care sensing spearheads personalized, predictive, and ultimately preventive healthcare.",
"title": ""
},
{
"docid": "c560dd620b3c9c6718ce717ac33f0c21",
"text": "This paper investigates the autocalibration of microelectromechanical systems (MEMS) triaxial accelerometer (TA) based on experimental design (DoE). First, for a special 6-parameter second-degree model, a six-point experimental scheme is proposed, and its G-optimality has been proven based on optimal DoE. Then, a new linearization approach is introduced, by which the TA model for autocalibration can be simplified as the expected second-degree form so that the proposed optimal experimental scheme can be applied. To reliably estimate the model parameter, a convergence-guaranteed iterative algorithm is also proposed, which can significantly reduce the bias caused by linearization. Thereafter, the effectiveness and robustness of the proposed approach have been demonstrated by simulation. Finally, the proposed calibration method has been experimentally verified using two typical types of MEMS TA, and desired experimental results effectively demonstrate the efficiency and accuracy of the proposed calibration approach.",
"title": ""
},
{
"docid": "6f6667e4c485978b566d25837083b565",
"text": "Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.",
"title": ""
},
{
"docid": "54502fdfcce5344583a8f4651a86cca2",
"text": "Resolution of inflammation and the return of tissues to homeostasis are essential. Efforts to identify molecular events governing termination of self-limited inflammation uncovered pathways in resolving exudates that actively generate, from essential omega fatty acids, new families of local-acting mediators. These chemical mediator families, termed resolvins and protectins, are potent stereoselective agonists that control the duration and magnitude of inflammation, joining the lipoxins as signals in resolution. This review examines the mapping of these circuits and recent advances in our understanding of the biosynthesis and actions of these novel proresolving lipid mediators. Aspirin jump-starts resolution by triggering biosynthesis of specific epimers of these mediators. In addition to their origins in inflammation resolution, these compounds also display potent protective roles in neural systems, liver, lung, and eye. Given the potent actions of lipoxins, resolvins, and protectins in models of human disease, deficiencies in resolution pathways may contribute to many diseases and offer exciting new potential for therapeutic control via resolution.",
"title": ""
},
{
"docid": "f6333ab767879cf1673bb50aeeb32533",
"text": "Github facilitates the pull-request mechanism as an outstanding social coding paradigm by integrating with social media. The review process of pull-requests is a typical crowd sourcing job which needs to solicit opinions of the community. Recommending appropriate reviewers can reduce the time between the submission of a pull-request and the actual review of it. In this paper, we firstly extend the traditional Machine Learning (ML) based approach of bug triaging to reviewer recommendation. Furthermore, we analyze social relations between contributors and reviewers, and propose a novel approach to recommend highly relevant reviewers by mining comment networks (CN) of given projects. Finally, we demonstrate the effectiveness of these two approaches with quantitative evaluations. The results show that CN-based approach achieves a significant improvement over the ML-based approach, and on average it reaches a precision of 78% and 67% for top-1 and top-2 recommendation respectively, and a recall of 77% for top-10 recommendation.",
"title": ""
}
] |
scidocsrr
|
314b23cad25424223a18e33cf1d86036
|
Introduction to Nonnegative Matrix Factorization
|
[
{
"docid": "2924bf341e11ecb332c34749e2cd051e",
"text": "Non-negative matrix factorization (NMF) has found numerous applications, due to its ability to provide interpretable decompositions. Perhaps surprisingly, existing results regarding its uniqueness properties are rather limited, and there is much room for improvement in terms of algorithms as well. Uniqueness aspects of NMF are revisited here from a geometrical point of view. Both symmetric and asymmetric NMF are considered, the former being tantamount to element-wise non-negative square-root factorization of positive semidefinite matrices. New uniqueness results are derived, e.g., it is shown that a sufficient condition for uniqueness is that the conic hull of the latent factors is a superset of a particular second-order cone. Checking this condition is shown to be NP-complete; yet this and other results offer insights on the role of latent sparsity in this context. On the computational side, a new algorithm for symmetric NMF is proposed, which is very different from existing ones. It alternates between Procrustes rotation and projection onto the non-negative orthant to find a non-negative matrix close to the span of the dominant subspace. Simulation results show promising performance with respect to the state-of-art. Finally, the new algorithm is applied to a clustering problem for co-authorship data, yielding meaningful and interpretable results.",
"title": ""
}
] |
[
{
"docid": "8fa9a91bb08c82830140e484456c5a16",
"text": "Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.",
"title": ""
},
{
"docid": "e63b48cbe317719f6ed02f82da26d0af",
"text": "Ensuring the quality of service (QoS) for latency-sensitive applications while allowing co-locations of multiple applications on servers is critical for improving server utilization and reducing cost in modern warehouse-scale computers (WSCs). Recent work relies on static profiling to precisely predict the QoS degradation that results from performance interference among co-running applications to increase the number of \"safe\" co-locations. However, these static profiling techniques have several critical limitations: 1) a priori knowledge of all workloads is required for profiling, 2) it is difficult for the prediction to capture or adapt to phase or load changes of applications, and 3) the prediction technique is limited to only two co-running applications.\n To address all of these limitations, we present Bubble-Flux, an integrated dynamic interference measurement and online QoS management mechanism to provide accurate QoS control and maximize server utilization. Bubble-Flux uses a Dynamic Bubble to probe servers in real time to measure the instantaneous pressure on the shared hardware resources and precisely predict how the QoS of a latency-sensitive job will be affected by potential co-runners. Once \"safe\" batch jobs are selected and mapped to a server, Bubble-Flux uses an Online Flux Engine to continuously monitor the QoS of the latency-sensitive application and control the execution of batch jobs to adapt to dynamic input, phase, and load changes to deliver satisfactory QoS. Batch applications remain in a state of flux throughout execution. Our results show that the utilization improvement achieved by Bubble-Flux is up to 2.2x better than the prior static approach.",
"title": ""
},
{
"docid": "de71bef095a0ef7fb4fb1b10d4136615",
"text": "Active learning—a class of algorithms that iteratively searches for the most informative samples to include in a training dataset—has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored as determining informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object detection performance. Our first metric measures “localization tightness” of an object hypothesis, which is based on the overlapping ratio between the region proposal and the final prediction. Our second metric measures “localization stability” of an object hypothesis, which is based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced up to 25%. Moreover, on PASCAL 2007 and 2012 datasets our localization-stability method has an average relative improvement of 96.5% and 81.9% over the base-line method using classification only. Asian Conference on Computer Vision This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2018 201 Broadway, Cambridge, Massachusetts 02139 Localization-Aware Active Learning for Object",
"title": ""
},
{
"docid": "7786fac57e0c1392c6a5101681baecb0",
"text": "We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 and 14000 object and environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practice for similar large scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.",
"title": ""
},
{
"docid": "7aad80319743ac72d2c4e117e5f831fa",
"text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.",
"title": ""
},
{
"docid": "d194d474676e5ee3113c588de30496c7",
"text": "While studies of social movements have mostly examined prevalent public discourses, undercurrents' the backstage practices consisting of meaning-making processes, narratives, and situated work-have received less attention. Through a qualitative interview study with sixteen participants, we examine the role of social media in supporting the undercurrents of the Umbrella Movement in Hong Kong. Interviews focused on an intense period of the movement exemplified by sit-in activities inspired by Occupy Wall Street in the USA. Whereas the use of Facebook for public discourse was similar to what has been reported in other studies, we found that an ecology of social media tools such as Facebook, WhatsApp, Telegram, and Google Docs mediated undercurrents that served to ground the public discourse of the movement. We discuss how the undercurrents sustained and developed public discourses in concrete ways.",
"title": ""
},
{
"docid": "d3fe815386eaee149859461031eaed5e",
"text": "The theoretical distinction between goal intentions (\"I intend to achieve -c\") and implementation intentions (\"I intend to perform goal-directed behavior y when I encounter situation z\" ; P. M. Gollwitzer, 1993) is explored by assessing the completion rate of various goal projects. In correlational Study 1, difficult goal intentions were completed about 3 times more often when participants had furnished them with implementation intentions. In experimental Study 2, all participants were assigned the same difficult goal intention, and half were instructed to form implementation intentions. The beneficial effects of implementation intentions paralleled diose of Study 1. In experimental Study 3, implementation intentions were observed to facilitate the immediate initiation of goaldirected action when the intended opportunity was encountered. Implementation intentions are interpreted to be powerful self-regulatory tools for overcoming the typical obstacles associated with the initiation of goal-directed actions.",
"title": ""
},
{
"docid": "de6ceb9c9a1c06e6aa879e8af79b4075",
"text": "Human pose estimation requires a versatile yet well-constrained spatial model for grouping locally ambiguous parts together to produce a globally consistent hypothesis. Previous works either use local deformable models deviating from a certain template, or use a global mixture representation in the pose space. In this paper, we propose a new hierarchical spatial model that can capture an exponential number of poses with a compact mixture representation on each part. Using latent nodes, it can represent high-order spatial relationship among parts with exact inference. Different from recent hierarchical models that associate each latent node to a mixture of appearance templates (like HoG), we use the hierarchical structure as a pure spatial prior avoiding the large and often confounding appearance space. We verify the effectiveness of this model in three ways. First, samples representing human-like poses can be drawn from our model, showing its ability to capture high-order dependencies of parts. Second, our model achieves accurate reconstruction of unseen poses compared to a nearest neighbor pose representation. Finally, our model achieves state-of-art performance on three challenging datasets, and substantially outperforms recent hierarchical models.",
"title": ""
},
{
"docid": "a4dd8ab8b45a8478ca4ac7e19debf777",
"text": "Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.",
"title": ""
},
{
"docid": "c533f121483bfd8de0cf20c319af5ff1",
"text": "This article revisits the concept of biologic width, in particular its clinical consequences for treatment options and decisions in light of modern dentistry approaches such as biomimetics and minimally invasive procedures. In the past, due to the need to respect biologic width, clinicians were used to removing periodontal tissue around deep cavities, bone, and gum so that the limits of restorations were placed far away from the epithelium and connective attachments, in order to prevent tissue loss, root exposure, opening of the proximal area (leading to black holes), and poor esthetics. Furthermore, no material was placed subgingivally in case it led to periodontal inflammation and attachment loss. Today, with the more conservative approach to restorative dentistry, former subtractive procedures are being replaced with additive ones. In view of this, one could propose deep margin elevation (DME) instead of crown lengthening as a change of paradigm for deep cavities. The intention of this study was to overview the literature in search of scientific evidence regarding the consequences of DME with different materials, particularly on the surrounding periodontium, from a clinical and histologic point of view. A novel approach is to extrapolate results obtained during root coverage procedures on restored roots to hypothesize the nature of the healing of proximal attachment tissue on a proper bonded material during a DME. Three clinical cases presented here illustrate these procedures. The hypothesis of this study was that even though crown lengthening is a valuable procedure, its indications should decrease in time, given that DME, despite being a very demanding procedure, seems to be well tolerated by the surrounding periodontium, clinically and histologically.",
"title": ""
},
{
"docid": "7c10a44e5fa0f9e01951e89336c4b4d6",
"text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.",
"title": ""
},
{
"docid": "28cba5bf535dabdfadfd1f634a574d52",
"text": "There are several complex business processes in the higher education. As the number of university students has been tripled in Hungary the automation of these task become necessary. The Near Field Communication (NFC) technology provides a good opportunity to support the automated execution of several education related processes. Recently a new challenge is identified at the Budapest University of Technology and Economics. As most of the lecture notes had become available in electronic format the students especially the inexperienced freshman ones did not attend to the lectures significantly decreasing the rate of successful exams. This drove to the decision to elaborate an accurate and reliable information system for monitoring the student's attendance at the lectures. Thus we have developed a novel, NFC technology based business use case of student attendance monitoring. In order to meet the requirements of the use case we have implemented a highly autonomous distributed environment assembled by NFC enabled embedded devices, so-called contactless terminals and a scalable backoffice. Beside the opportunity of contactless card based student identification the terminals support biometric identification by fingerprint reading. These features enable the implementation of flexible and secure identification scenarios. The attendance monitoring use case has been tested in a pilot project involving about 30 access terminals and more that 1000 students. In this paper we are introducing the developed attendance monitoring use case, the implemented NFC enabled system, and the experiences gained during the pilot project.",
"title": ""
},
{
"docid": "862018e61fc5a33f5661ad47e5ab0821",
"text": "AIM\nThe purpose of this study was to examine the role of staff nurse emotional intelligence between transformational leadership and nurse intent to stay.\n\n\nBACKGROUND\nNurse intent to stay and transformational leadership are widely recognized as vital components of nurse retention. Staff nurse emotional intelligence that has been confirmed improvable has been recently recognized in the nursing literature as correlated with retention. Yet, the nature of the relationships among these three variables is not known.\n\n\nMETHODS\nCross-sectional data for 535 Chinese nurses were analysed using Structural Equation Modelling.\n\n\nRESULTS\nTransformational leadership and staff nurse emotional intelligence were significant predictors of nurse intent to stay, accounting for 34.3% of the variance in nurse intent to stay. Staff nurse emotional intelligence partially mediates the relationship between transformational leadership and nurse intent to stay.\n\n\nCONCLUSION\nThe findings of the study emphasized the importance of transformational leadership in enhancing nurse emotional intelligence and to provide a deeper understanding of the mediating role of emotional intelligence in the relationship between nurse manager's transformational leadership and nurse's intent to stay.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNurse leaders should develop training programmes to improve nursing manager transformational leadership and staff nurse emotional intelligence in the workplace.",
"title": ""
},
{
"docid": "687b8d68cd2fe687dff2edb77fec0f63",
"text": "MicroRNAs (miRNAs) are an abundant class of small non-protein-coding RNAs that function as negative gene regulators. They regulate diverse biological processes, and bioinformatic data indicates that each miRNA can control hundreds of gene targets, underscoring the potential influence of miRNAs on almost every genetic pathway. Recent evidence has shown that miRNA mutations or mis-expression correlate with various human cancers and indicates that miRNAs can function as tumour suppressors and oncogenes. miRNAs have been shown to repress the expression of important cancer-related genes and might prove useful in the diagnosis and treatment of cancer.",
"title": ""
},
{
"docid": "578e069a88a6f885d5b5fcbfb9d1d658",
"text": "While a photograph is a visual artifact, studies reveal that a number of people with visual impairments are also interested in being able to share their memories and experiences with their sighted counterparts in the form of a photograph. We conducted an online survey to better understand the challenges faced by people with visual impairments in sharing and organizing photos, and reviewed existing tools and their limitations. Based on our analysis, we developed an accessible mobile application that enables a visually impaired user to capture photos along with audio recordings for the ambient sound and memo description and to browse through them eyes-free. Five visually impaired participants took part in a study in which they used our app to take photographs in naturalistic settings and to share them later with a sighted viewer. The participants were able to use our app to identify each photograph on their own during the photo sharing session, and reported high satisfaction in having been able to take the initiative during the process.",
"title": ""
},
{
"docid": "13173c37670511963b23a42a3cc7e36b",
"text": "In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.",
"title": ""
},
{
"docid": "9f504d6a64a4770d2efb09a711e60279",
"text": "Large scale optimization problems are ubiquitous in machine learning and data analysis and there is a plethora of algorithms for solving such problems. Many of these algorithms employ sub-sampling, as a way to either speed up the computations and/or to implicitly implement a form of statistical regularization. In this paper, we consider second-order iterative optimization algorithms, i.e., those that use Hessian as well as gradient information, and we provide bounds on the convergence of the variants of Newton’s method that incorporate uniform sub-sampling as a means to estimate the gradient and/or Hessian. Our bounds are non-asymptotic, i.e., they hold for finite number of data points in finite dimensions for finite number of iterations. In addition, they are quantitative and depend on the quantities related to the problem, i.e., the condition number. However, our algorithms are global and are guaranteed to converge from any initial iterate. Using random matrix concentration inequalities, one can sub-sample the Hessian in a way that the curvature information is preserved. Our first algorithm incorporates such sub-sampled Hessian while using the full gradient. We also give additional convergence results for when the sub-sampled Hessian is regularized by modifying its spectrum or ridge-type regularization. Next, in addition to Hessian sub-sampling, we also consider sub-sampling the gradient as a way to further reduce the computational complexity per iteration. We use approximate matrix multiplication results from randomized numerical linear algebra (RandNLA) to obtain the proper sampling strategy. In all these algorithms, computing the update boils down to solving a large scale linear system, which can be computationally expensive. As a remedy, for all of our algorithms, we also give global convergence results for the case of inexact updates where such linear system is solved only approximately. This paper has a more advanced companion paper [42] in which we demonstrate that, by doing a finer-grained analysis, we can get problem-independent bounds for local convergence of these algorithms and explore tradeoffs to improve upon the basic results of the present paper. ∗International Computer Science Institute, Berkeley, CA 94704 and Department of Statistics, University of California at Berkeley, Berkeley, CA 94720. farbod/mmahoney@stat.berkeley.edu.",
"title": ""
},
{
"docid": "9c3218ce94172fd534e2a70224ee564f",
"text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.",
"title": ""
},
{
"docid": "fecacef7460517ddb4f1d8dc66a089ea",
"text": "Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2% mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1% mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation.",
"title": ""
},
{
"docid": "6f6667e4c485978b566d25837083b565",
"text": "Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.",
"title": ""
}
] |
scidocsrr
|
48261ccb2ec7c3702e637f1c0b460f47
|
Efficient approaches for escaping higher order saddle points in non-convex optimization
|
[
{
"docid": "181eafc11f3af016ca0926672bdb5a9d",
"text": "The conventional wisdom is that backprop nets with excess hi dden units generalize poorly. We show that nets with excess capacity ge neralize well when trained with backprop and early stopping. Experim nts suggest two reasons for this: 1) Overfitting can vary significant ly i different regions of the model. Excess capacity allows better fit to reg ions of high non-linearity, and backprop often avoids overfitting the re gions of low non-linearity. 2) Regardless of size, nets learn task subco mponents in similar sequence. Big nets pass through stages similar to th ose learned by smaller nets. Early stopping can stop training the large n et when it generalizes comparably to a smaller net. We also show that co njugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linea rity.",
"title": ""
}
] |
[
{
"docid": "71244969f0e3a1f64c0f0286519c7998",
"text": "In present day scenario the security and authentication is very much needed to make a safety world. Beside all security one vital issue is recognition of number plate from the car for Authorization. In the busy world everything cannot be monitor by a human, so automatic license plate recognition is one of the best application for authorization without involvement of human power. In the proposed method we have make the problem into three fold, firstly extraction of number plate region, secondly segmentation of character and finally Authorization through recognition and classification. For number plate extraction and segmentation we have used morphological based approaches where as for classification we have used Neural Network as classifier. The proposed method is working well in varieties of scenario and the performance level is quiet good..",
"title": ""
},
{
"docid": "53dc606897bd6388c729cc8138027b31",
"text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.",
"title": ""
},
{
"docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "51446063ea738c3d06e80a5a362f795d",
"text": "This paper presents SpinLight, an indoor positioning system that uses infrared LED lamps as signal transmitters, and light sensors as receivers. The main idea is to divide the space into spatial beams originating from the light source, and identify each beam with a unique timed sequence of light signals. This sequence is created by a coded shade that covers and rotates around the LED, blocking the light or allowing it to pass through according to pre-defined patterns. The receiver, equipped with a light sensor, is able to determine its spatial beam by detecting the light signals, followed by optimization schemes to refine its location within that beam. We present both 2D and 3D localization designs, demonstrated by a prototype implementation. Experiments show that SpinLight produces a median location error of 3.8 cm, with a 95th percentile of 6.8 cm. The receiver design is very low power and thus can operate for months to years from a button coin battery.",
"title": ""
},
{
"docid": "97de6efcdba528f801cbfa087498ab3f",
"text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "f2c8af1f4bcf7115fc671ae9922adbb3",
"text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.",
"title": ""
},
{
"docid": "d4aca467d0014b2c2359f5609a1a199b",
"text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.",
"title": ""
},
{
"docid": "34b4a91dac887d6d0c7387baae9fd0a2",
"text": "Robert Burns wrote: “The best laid schemes of Mice and Men oft go awry”. This could be considered the motto of most educational innovation. The question that arises is not so much why some innovations fail (although this is very important question), but rather why other innovations succeed? This study investigated the success factors of large-scale educational innovation projects in Dutch higher education. The research team attempted to identify success factors that might be relevant to educational innovation projects. The research design was largely qualitative, with a guided interview as the primary means of data collection, followed by data analysis and a correlation of findings with the success factors identified in the literature review. In order to pursue the research goal, a literature review of success factors was first conducted to identify existing knowledge in this area, followed by a detailed study of the educational innovation projects that have been funded by SURF Education. To obtain a list of potential success factors, existing project documentation and evaluations were reviewed and the project chairs and other important players were interviewed. Reports and evaluations by the projects themselves were reviewed to extract commonalities and differences in the factors that the projects felt were influential in their success of educational innovation. In the next phase of the project experts in the field of project management, project chairs of successful projects and evaluators/raters of projects will be asked to pinpoint factors of importance that were facilitative or detrimental to the outcome of their projects and implementation of the innovations. After completing the interviews all potential success factors will be recorded and clustered using an affinity technique. The clusters will then be labeled and clustered, creating a hierarchy of potential success factors. The project chairs will finally be asked to select the five most important success factors out of the hierarchy, and to rank their importance. This technique – the Experts’ Concept Mapping Method – is based upon Trochim’s concept mapping approach (1989a, 1989b) and was developed and perfected by Stoyanov and Kirschner (2004). Finally, the results will lead to a number of instruments as well as a functional procedure for tendering, selecting and monitoring innovative educational projects. The identification of success factors for educational innovation projects and measuring performance of projects based upon these factors are important as they can aid the development and implementation of innovation projects by explicating and making visible (and thus manageable) those success and failure factors relating to educational innovation projects in higher education. Determinants for Failure and Success of Innovation Projects: The Road to Sustainable Educational Innovation The Dutch Government has invested heavily in stimulating better and more creative use of information and communication technologies (ICT) in all forms of education. The ultimate goal of this investment is to ensure that students and teachers are equipped with the skills and knowledge required for success in the new knowledge-based economy. All stakeholders (i.e., government, industry, educational institutions, society in general) have placed high priority on achieving this goal. However, these highly funded projects have often resulted in either short-lived or local successes or outright failures (see De Bie,",
"title": ""
},
{
"docid": "071c6e558a0991da4201ae0d966ec391",
"text": "This work presents a scalable solution to open-vocabulary visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively.",
"title": ""
},
{
"docid": "48b25af0d0e0bed6315b0dcf4e6573b3",
"text": "Datasets published in the LOD cloud are recommended to follow some best practice in order to be 4-5 stars Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods enabling to detect automatically what an arbitrary dataset is about and to recommend visualizations for data samples. We consider that “a visualization is worth a million triples”, and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate type of visualization for those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz).",
"title": ""
},
{
"docid": "a2a4908ab05abc1fe62c149d0012c031",
"text": "Model compression is significant for wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and in business clusters requiring quick responses to large-scale service requests. In this work, we focus on reducing the sizes of basic structures (including input updates, gates, hidden states, cell states and outputs) within Long Short-Term Memory (LSTM) units, so as to learn structurally-sparse LSTMs. Independently reducing the sizes of those basic structures can result in unmatched dimensions among them, and consequently, end up with invalid LSTM units. To overcome this, we propose Intrinsic Sparse Structures (ISS) in LSTMs. By reducing one component of ISS, the sizes of those basic structures are simultaneously reduced by one such that the consistency of dimensions is maintained. By learning ISS within LSTM units, the eventual LSTMs are still regular LSTMs but have much smaller sizes of basic structures. Our method achieves 10.59× speedup in state-of-the-art LSTMs, without losing any perplexity of language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our source code is public available1.",
"title": ""
},
{
"docid": "29d43e9ec2afa314c4a00f26ce816e7e",
"text": "The aim of this paper is to discuss about various feature selection algorithms applied on different datasets to select the relevant features to classify data into binary and multi class in order to improve the accuracy of the classifier. Recent researches in medical diagnose uses the different kind of classification algorithms to diagnose the disease. For predicting the disease, the classification algorithm produces the result as binary class. When there is a multiclass dataset, the classification algorithm reduces the dataset into a binary class for simplification purpose by using any one of the data reduction methods and the algorithm is applied for prediction. When data reduction on original dataset is carried out, the quality of the data may degrade and the accuracy of an algorithm will get affected. To maintain the effectiveness of the data, the multiclass data must be treated with its original form without maximum reduction, and the algorithm can be applied on the dataset for producing maximum accuracy. Dataset with maximum number of attributes like thousands must incorporate the best feature selection algorithm for selecting the relevant features to reduce the space and time complexity. The performance of Classification algorithm is estimated by how accurately it predicts the individual class on particular dataset. The accuracy constrain mainly depends on the selection of appropriate features from the original dataset. The feature selection algorithms play an important role in classification for better performance. The feature selection is one of",
"title": ""
},
{
"docid": "4e7443088eedf5e6199959a06ebc420c",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "91cd1546f366726a32038b5f78ae1d16",
"text": "ns c is LBNL’s Network Simulator [20]. The simulator is written in C++; it uses OTcl a s command and configuration interface.nsv2 has three substantial changes from nsv1: (1) the more complex objects in nsv1 have been decomposed into simpler components for greater flexibility and composabili ty; (2) the configuration interface is now OTcl, an object ori ented version of Tcl; and (3) the interface code to the OTcl interpr te is separate from the main simulator. Ns documentation is available in html, Postscript, and PDF f ormats. Seehttp://www.isi.edu/nsnam/ns/ns-documentation. html for pointers to these.",
"title": ""
},
{
"docid": "6fbf1dff8df2c97f44e236a9c7ffac2a",
"text": "The generation of multimode orbital angular momentum (OAM) carrying beams has attracted more and more attention. A broadband dual-polarized dual-OAM-mode uniform circular array is proposed in this letter. The proposed antenna array, which consists of a broadband dual-polarized bow-tie dipole array and a broadband phase-shifting feeding network, can be used to generate OAM mode −1 and OAM mode 1 beams from 2.1 to 2.7 GHz (a bandwidth of 25%) for each of two polarizations. Four orthogonal channels can be provided by the proposed antenna array. A 2.5-m broadband OAM link is built. The measured crosstalk between the mode matched channels and the mode mismatched channels is less than −12 dB at 2.1, 2.4, and 2.7 GHz. Four different data streams are transmitted simultaneously by the proposed array with a bit error rate less than 4.2×10-3 at 2.1, 2.4, and 2.7 GHz.",
"title": ""
},
{
"docid": "a9b0d197e41fc328502c71c0ddf7b91e",
"text": "We propose a new full-rate space-time block code (STBC) for two transmit antennas which can be designed to achieve maximum diversity or maximum capacity while enjoying optimized coding gain and reduced-complexity maximum-likelihood (ML) decoding. The maximum transmit diversity (MTD) construction provides a diversity order of 2Nr for any number of receive antennas Nr at the cost of channel capacity loss. The maximum channel capacity (MCC) construction preserves the mutual information between the transmit and the received vectors while sacrificing diversity. The system designer can switch between the two constructions through a simple parameter change based on the operating signal-to-noise ratio (SNR), signal constellation size and number of receive antennas. Thanks to their special algebraic structure, both constructions enjoy low-complexity ML decoding proportional to the square of the signal constellation size making them attractive alternatives to existing full-diversity full-rate STBCs in [6], [3] which have high ML decoding complexity proportional to the fourth order of the signal constellation size. Furthermore, we design a differential transmission scheme for our proposed STBC, derive the exact ML differential decoding rule, and compare its performance with competitive schemes. Finally, we investigate transceiver design and performance of our proposed STBC in spatial multiple-access scenarios and over frequency-selective channels.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
{
"docid": "b4529985e1fa4e156900c9825fc1c6f9",
"text": "This paper presents the SWaT testbed, a modern industrial control system (ICS) for security research and training. SWaT is currently in use to (a) understand the impact of cyber and physical attacks on a water treatment system, (b) assess the effectiveness of attack detection algorithms, (c) assess the effectiveness of defense mechanisms when the system is under attack, and (d) understand the cascading effects of failures in one ICS on another dependent ICS. SWaT consists of a 6-stage water treatment process, each stage is autonomously controlled by a local PLC. The local fieldbus communications between sensors, actuators, and PLCs is realized through alternative wired and wireless channels. While the experience with the testbed indicates its value in conducting research in an active and realistic environment, it also points to design limitations that make it difficult for system identification and attack detection in some experiments.",
"title": ""
}
] |
scidocsrr
|
9810686a48fd7a907ee23218d7db120e
|
Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises.
|
[
{
"docid": "e5af3960416553f9b1b03d8b974be8d0",
"text": "We propose a feature enhancement algorithm to improve robust automatic speech recognition (ASR). The algorithm estimates a smoothed ideal ratio mask (IRM) in the Mel frequency domain using deep neural networks and a set of time-frequency unit level features that has previously been used to estimate the ideal binary mask. The estimated IRM is used to filter out noise from a noisy Mel spectrogram before performing cepstral feature extraction for ASR. On the noisy subset of the Aurora-4 robust ASR corpus, the proposed enhancement obtains a relative improvement of over 38% in terms of word error rates using ASR models trained in clean conditions, and an improvement of over 14% when the models are trained using the multi-condition training data. In terms of instantaneous SNR estimation performance, the proposed system obtains a mean absolute error of less than 4 dB in most frequency channels.",
"title": ""
},
{
"docid": "1cd45a4f897ea6c473d00c4913440836",
"text": "What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation.",
"title": ""
},
{
"docid": "9839b99ed67f541e95441f9e55da705c",
"text": "Machine learning algorithms to segregate speech from background noise hold considerable promise for alleviating limitations associated with hearing impairment. One of the most important considerations for implementing these algorithms into devices such as hearing aids and cochlear implants involves their ability to generalize to conditions not employed during the training stage. A major challenge involves the generalization to novel noise segments. In the current study, sentences were segregated from multi-talker babble and from cafeteria noise using an algorithm that employs deep neural networks to estimate the ideal ratio mask. Importantly, the algorithm was trained on segments of noise and tested using entirely novel segments of the same nonstationary noise type. Substantial sentence-intelligibility benefit was observed for hearing-impaired listeners in both noise types, despite the use of unseen noise segments during the test stage. Interestingly, normal-hearing listeners displayed benefit in babble but not in cafeteria noise. This result highlights the importance of evaluating these algorithms not only in human subjects, but in members of the actual target population.",
"title": ""
}
] |
[
{
"docid": "2ab2280b7821ae6ad27fff995fd36fe0",
"text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.",
"title": ""
},
{
"docid": "54c0e238e904cfd9e5e155c1738a004f",
"text": "Modern GPUs offer much computing power at a very modest cost. Even though CUDA and other related recent developments are accelerating the use of GPUs for general purpose applications, several challenges still remain in programming the GPUs. Thus, it is clearly desirable to be able to program GPUs using a higher-level interface.\n In this paper, we offer a solution that targets a specific class of applications, which are the data mining and scientific data analysis applications. Our work is driven by the observation that a common processing structure, that of generalized reductions, fits a large number of popular data mining algorithms. In our solution, the programmers simply need to specify the sequential reduction loop(s) with some additional information about the parameters. We use program analysis and code generation to map the applications to a GPU. Several additional optimizations are also performed by the system.\n We have evaluated our system using three popular data mining applications, k-means clustering, EM clustering, and Principal Component Analysis (PCA). The main observations from our experiments are as follows. The speedup that each of these applications achieve over a sequential CPU version ranges between 20 and 50. The automatically generated version did not have any noticeable overheads compared to hand written codes. Finally, the optimizations performed in the system resulted in significant performance improvements.",
"title": ""
},
{
"docid": "78b913c7998239a817bf2e8745fd97e7",
"text": "We consider the problem of designing an artificial agent capable of interacting with humans in collaborative dialogue to produce creative, engaging narratives. In this task, the goal is to establish universe details, and to collaborate on an interesting story in that universe, through a series of natural dialogue exchanges. Our model can augment any probabilistic conversational agent by allowing it to reason about universe information established and what potential next utterances might reveal. Ideally, with each utterance, agents would reveal just enough information to add specificity and reduce ambiguity without limiting the conversation. We empirically show that our model allows control over the rate at which the agent reveals information and that doing so significantly improves accuracy in predicting the next line of dialogues from movies. We close with a case-study with four professional theatre performers, who preferred interactions with our model-augmented agent over an unaugmented agent.",
"title": ""
},
{
"docid": "1d4f89bb3e289ed138f45af0f1e3fc39",
"text": "The “covariance” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudo-covariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudo-covariance are called proper. It is shown that properness is preserved under affine transformations and that the complex-multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex-multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum, if and only if the random vector is proper, Gaussian and zero-mean. The notion of circular stutionarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel. Znder Terms-Proper complex random processes, circular stationarity, intersymbol interference, capacity.",
"title": ""
},
{
"docid": "cb4c33d4adfc7f3c0b659edcfd774e8b",
"text": "Convolutional Neural Networks (CNNs) have achieved comparable error rates to well-trained human on ILSVRC2014 image classification task. To achieve better performance, the complexity of CNNs is continually increasing with deeper and bigger architectures. Though CNNs achieved promising external classification behavior, understanding of their internal work mechanism is still limited. In this work, we attempt to understand the internal work mechanism of CNNs by probing the internal representations in two comprehensive aspects, i.e., visualizing patches in the representation spaces constructed by different layers, and visualizing visual information kept in each layer. We further compare CNNs with different depths and show the advantages brought by deeper architecture.",
"title": ""
},
{
"docid": "1baac17f06084dc6609374b037edbb62",
"text": "BACKGROUND\nSurgical skill assessment has predominantly been a subjective task. Recently, technological advances such as robot-assisted surgery have created great opportunities for objective surgical evaluation. In this paper, we introduce a predictive framework for objective skill assessment based on movement trajectory data. Our aim is to build a classification framework to automatically evaluate the performance of surgeons with different levels of expertise.\n\n\nMETHODS\nEight global movement features are extracted from movement trajectory data captured by a da Vinci robot for surgeons with two levels of expertise - novice and expert. Three classification methods - k-nearest neighbours, logistic regression and support vector machines - are applied.\n\n\nRESULTS\nThe result shows that the proposed framework can classify surgeons' expertise as novice or expert with an accuracy of 82.3% for knot tying and 89.9% for a suturing task.\n\n\nCONCLUSION\nThis study demonstrates and evaluates the ability of machine learning methods to automatically classify expert and novice surgeons using global movement features.",
"title": ""
},
{
"docid": "289005e2f4d666a606f7dfd9c8f7a1f4",
"text": "In this paper we present the design of a fin-like dielectric elastomer actuator (DEA) that drives a miniature autonomous underwater vehicle (AUV). The fin-like actuator is modular and independent of the body of the AUV. All electronics required to run the actuator are inside the 100 mm long 3D-printed body, allowing for autonomous mobility of the AUV. The DEA is easy to manufacture, requires no pre-stretch of the elastomers, and is completely sealed for underwater operation. The output thrust force can be tuned by stacking multiple actuation layers and modifying the Young's modulus of the elastomers. The AUV is reconfigurable by a shift of its center of mass, such that both planar and vertical swimming can be demonstrated on a single vehicle. For the DEA we measured thrust force and swimming speed for various actuator designs ran at frequencies from 1 Hz to 5 Hz. For the AUV we demonstrated autonomous planar swimming and closed-loop vertical diving. The actuators capable of outputting the highest thrust forces can power the AUV to swim at speeds of up to 0.55 body lengths per second. The speed falls in the upper range of untethered swimming robots powered by soft actuators. Our tunable DEAs also demonstrate the potential to mimic the undulatory motions of fish fins.",
"title": ""
},
{
"docid": "1abd6ff44e39d16a7f01cc2796dcdf77",
"text": "A 1.0 mm3 general-purpose sensor node platform with heterogeneous multi-layer structure is proposed. The sensor platform benefits from modularity by allowing the addition/removal of IC layers. A new low power I2C interface is introduced for energy efficient inter-layer communication with compatibility to commercial I2C protocols. A self-adapting power management unit is proposed for efficient battery voltage down conversion for wide range of battery voltages and load current. The power management unit also adapts itself by monitoring energy harvesting conditions and harvesting sources and is capable of harvesting from solar, thermal and microbial fuel cells. An optical wakeup receiver is proposed for sensor node programming and synchronization with 228 pW standby power. The system also includes two processors, timer, temperature sensor, and low-power imager. Standby power of the system is 11 nW.",
"title": ""
},
{
"docid": "3e54834b8e64bbdf25dd0795e770d63c",
"text": "Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms basically based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. This proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, support vector machine learning will be utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1 % with the significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning technique was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.",
"title": ""
},
{
"docid": "3dfb419706ae85d232753a085dc145f7",
"text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.",
"title": ""
},
{
"docid": "b720df1467aade5dd1ba82602ba14591",
"text": "Modern medical devices and equipment have become very complex and sophisticated and are expected to operate under stringent environments. Hospitals must ensure that their critical medical devices are safe, accurate, reliable and operating at the required level of performance. Even though the importance, the application of all inspection, maintenance and optimization models to medical devices is fairly new. In Canada, most, if not all healthcare organizations include all their medical equipment in their maintenance program and just follow manufacturers’ recommendations for preventative maintenance. Then, current maintenance strategies employed in hospitals and healthcare organizations have difficulty in identifying specific risks and applying optimal risk reduction activities. This paper addresses these gaps found in literature for medical equipment inspection and maintenance and reviews various important aspects including current policies applied in hospitals. Finally we suggest future research which will be the starting point to develop tools and policies for better medical devices management in the future.",
"title": ""
},
{
"docid": "a8342d5a512fe99cefd8f4ce9f1208e8",
"text": "In this paper, we formally define the problem of representing and leveraging abstract event causality to power downstream applications. We propose a novel solution to this problem, which build an abstract causality network and embed the causality network into a continuous vector space. The abstract causality network is generalized from a specific one, with abstract event nodes represented by frequently co-occurring word pairs. To perform the embedding task, we design a dual cause-effect transition model. Therefore, the proposed method can obtain general, frequent, and simple causality patterns, meanwhile, simplify event matching. Given the causality network and the learned embeddings, our model can be applied to a wide range of applications such as event prediction, event clustering and stock market movement prediction. Experimental results demonstrate that 1) the abstract causality network is effective for discovering high-level causality rules behind specific causal events; 2) the embedding models perform better than state-of-the-art link prediction techniques in predicting events; and 3) the event causality embedding is an easy-to-use and sophisticated feature for downstream applications such as stock market movement prediction.",
"title": ""
},
{
"docid": "1ac03a7890a0145a8492a881caec4005",
"text": "The rapid growth of data and data sharing have been driven an evolution in distributed storage infrastructure. The need for sensitive data protection and the capacity to handle massive data sets have encouraged the research and development of secure and scalable storage systems. This paper identifies major security issues and requirements of data protection related to distributed data storage systems. We classify the security services and techniques in existing or proposed storage systems. We then discuss potential research topics and future trends.",
"title": ""
},
{
"docid": "01cd8355e0604868659e1a312d385ebe",
"text": "In the past years, knowledge graphs have proven to be beneficial for recommender systems, efficiently addressing paramount issues such as new items and data sparsity. At the same time, several works have recently tackled the problem of knowledge graph completion through machine learning algorithms able to learn knowledge graph embeddings. In this paper, we show that the item recommendation problem can be seen as a specific case of knowledge graph completion problem, where the “feedback” property, which connects users to items that they like, has to be predicted. We empirically compare a set of state-of-the-art knowledge graph embeddings algorithms on the task of item recommendation on the Movielens 1M dataset. The results show that knowledge graph embeddings models outperform traditional collaborative filtering baselines and that TransH obtains the best performance.",
"title": ""
},
{
"docid": "d9e0fd8abb80d6256bd86306b7112f20",
"text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.",
"title": ""
},
{
"docid": "fcdde2f5b55b6d8133e6dea63d61b2c8",
"text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of <italic>n</italic> cities, indexed by 1, ··· , <italic>n</italic>. He leaves from a “base city” indexed by 0, visits each of the <italic>n</italic> other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly <italic>t</italic> times, including his final return (here <italic>t</italic> may be allowed to vary), and he must visit no more than <italic>p</italic> cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if <italic>t</italic> is fixed, then for the problem to have a solution we must have <italic>tp</italic> ≧ <italic>n</italic>. For <italic>t</italic> = 1, <italic>p</italic> ≧ <italic>n</italic>, we have the standard traveling salesman problem.\nLet <italic>d<subscrpt>ij</subscrpt></italic> (<italic>i</italic> ≠ <italic>j</italic> = 0, 1, ··· , <italic>n</italic>) be the distance covered in traveling from city <italic>i</italic> to city <italic>j</italic>. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑<subscrpt>0≦<italic>i</italic>≠<italic>j</italic>≦<italic>n</italic></subscrpt>∑ <italic>d<subscrpt>ij</subscrpt>x<subscrpt>ij</subscrpt></italic> over the set determined by the relations ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=0<italic>i</italic>≠<italic>j</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>j</italic> = 1, ··· , <italic>n</italic>) ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>j</italic>=0<italic>j</italic>≠<italic>i</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>i</italic> = 1, ··· , <italic>n</italic>) <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> + <italic>px<subscrpt>ij</subscrpt></italic> ≦ <italic>p</italic> - 1 (1 ≦ <italic>i</italic> ≠ <italic>j</italic> ≦ <italic>n</italic>) where the <italic>x<subscrpt>ij</subscrpt></italic> are non-negative integers and the <italic>u<subscrpt>i</subscrpt></italic> (<italic>i</italic> = 1, …, <italic>n</italic>) are arbitrary real numbers. 
(We shall see that it is permissible to restrict the <italic>u<subscrpt>i</subscrpt></italic> to be non-negative integers as well.)\n If <italic>t</italic> is fixed it is necessary to add the additional relation: ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>u</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> = <italic>t</italic> Note that the constraints require that <italic>x<subscrpt>ij</subscrpt></italic> = 0 or 1, so that a natural correspondence between these two problems exists if the <italic>x<subscrpt>ij</subscrpt></italic> are interpreted as follows: The salesman proceeds from city <italic>i</italic> to city <italic>j</italic> if and only if <italic>x<subscrpt>ij</subscrpt></italic> = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has <italic>x<subscrpt>ij</subscrpt></italic> which do define a legitimate itinerary in (1), and, conversely a legitimate itinerary in (1) defines <italic>x<subscrpt>ij</subscrpt></italic>, which, together with appropriate <italic>u<subscrpt>i</subscrpt></italic>, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt>. The constraints of the form ∑ <italic>x<subscrpt>ij</subscrpt></italic> = 1, all <italic>x<subscrpt>ij</subscrpt></italic> non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The <italic>u<subscrpt>i</subscrpt></italic> play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than <italic>p</italic> cities. Consider any <italic>x</italic><subscrpt><italic>r</italic><subscrpt>0</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> = 1 (<italic>r</italic><subscrpt>1</subscrpt> ≠ 0). There exists a unique <italic>r</italic><subscrpt>2</subscrpt> such that <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> = 1. Unless <italic>r</italic><subscrpt>2</subscrpt> = 0, there is a unique <italic>r</italic><subscrpt>3</subscrpt> with <italic>x</italic><subscrpt><italic>r</italic><subscrpt>2</subscrpt><italic>r</italic><subscrpt>3</subscrpt></subscrpt> = 1. We proceed in this fashion until some <italic>r<subscrpt>j</subscrpt></italic> = 0. This must happen since the alternative is that at some point we reach an <italic>r<subscrpt>k</subscrpt></italic> = <italic>r<subscrpt>j</subscrpt></italic>, <italic>j</italic> + 1 < <italic>k</italic>. \n Since none of the <italic>r</italic>'s are zero we have <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r<subscrpt>i</subscrpt></italic><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ - 1. 
Summing from <italic>i</italic> = <italic>j</italic> to <italic>k</italic> - 1, we have <italic>u<subscrpt>r<subscrpt>j</subscrpt></subscrpt></italic> - <italic>u<subscrpt>r<subscrpt>k</subscrpt></subscrpt></italic> = 0 ≦ <italic>j</italic> + 1 - <italic>k</italic>, which is a contradiction. Thus all tours include city 0. It remains to observe that no tours is of length greater than <italic>p</italic>. Suppose such a tour exists, <italic>x</italic><subscrpt>0<italic>r</italic><subscrpt>1</subscrpt></subscrpt> , <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> , ···· , <italic>x</italic><subscrpt><italic>r<subscrpt>p</subscrpt>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> = 1 with all <italic>r<subscrpt>i</subscrpt></italic> ≠ 0. Then, as before, <italic>u</italic><subscrpt><italic>r</italic>1</subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> ≦ - <italic>p</italic> or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≧ <italic>p</italic>.\n But we have <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> (1 - <italic>x</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt>) - 1 ≦ <italic>p</italic> - 1, which is a contradiction.\nConversely, if the <italic>x<subscrpt>ij</subscrpt></italic> correspond to a legitimate itinerary, it is clear that the <italic>u<subscrpt>i</subscrpt></italic> can be adjusted so that <italic>u<subscrpt>i</subscrpt></italic> = <italic>j</italic> if city <italic>i</italic> is the <italic>j</italic>th city visited in the tour which includes city <italic>i</italic>, for we then have <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> = - 1 if <italic>x<subscrpt>ij</subscrpt></italic> = 1, and always <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> ≦ <italic>p</italic> - 1.\n The above integer program involves <italic>n</italic><supscrpt>2</supscrpt> + <italic>n</italic> constraints (if <italic>t</italic> is not fixed) in <italic>n</italic><supscrpt>2</supscrpt> + 2<italic>n</italic> variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2<italic>n</italic> variables, say the <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> and <italic>x</italic><subscrpt>0<italic>j</italic></subscrpt>, by means of the equation constraints and produce",
"title": ""
},
{
"docid": "36fbc5f485d44fd7c8726ac0df5648c0",
"text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.",
"title": ""
},
{
"docid": "9f8c05f7825067ca86caa16547f709e7",
"text": "We consider video object cut as an ensemble of frame-level background-foreground object classifiers which fuses information across frames and refine their segmentation results in a collaborative and iterative manner. Our approach addresses the challenging issues of modeling of background with dynamic textures and segmentation of foreground objects from cluttered scenes. We construct patch-level bag-of-words background models to effectively capture the background motion and texture dynamics. We propose a foreground salience graph (FSG) to characterize the similarity of an image patch to the bag-of-words background models in the temporal domain and to neighboring image patches in the spatial domain. We incorporate this similarity information into a graph-cut energy minimization framework for foreground object segmentation. The background-foreground classification results at neighboring frames are fused together to construct a foreground probability map to update the graph weights. The resulting object shapes at neighboring frames are also used as constraints to guide the energy minimization process during graph cut. Our extensive experimental results and performance comparisons over a diverse set of challenging videos with dynamic scenes, including the new Change Detection Challenge Dataset, demonstrate that the proposed ensemble video object cut method outperforms various state-of-the-art algorithms.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "a0852f31be3791d7ce52b99930ea95d1",
"text": "Stock trading system to assist decision-making is an emerging research area and has great commercial potentials. Successful trading operations should occur near the reversal points of price trends. Traditional technical analysis, which usually appears as various trading rules, does aim to look for peaks and bottoms of trends and is widely used in stock market. Unfortunately, it is not convenient to directly apply technical analysis since it depends on person’s experience to select appropriate rules for individual share. In this paper, we enhance conventional technical analysis with Genetic Algorithms by learning trading rules from history for individual stock and then combine different rules together with Echo State Network to provide trading suggestions. Numerous experiments on S&P 500 components demonstrate that whether in bull or bear market, our system significantly outperforms buy-and-hold strategy. Especially in bear market where S&P 500 index declines a lot, our system still profits. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
55b4c38b45b0e1063fb59c14de699738
|
Bayesian Multi-Task Reinforcement Learning
|
[
{
"docid": "401c8a60d89af590925c13e7d22da2ff",
"text": "We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems.",
"title": ""
},
{
"docid": "53be2c41da023d9e2380e362bfbe7cce",
"text": "A rich and exible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, nite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametri c problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for tting them. In this article we present two general types of Gibbs samplers that can be used to t posteriors of Bayesian hierarchical models based on stick-breaking priors. The rst type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We nd that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.",
"title": ""
}
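As a quick illustration of the stick-breaking construction described in the passage above, the sketch below draws a truncated stick-breaking approximation of a Dirichlet process. The truncation level `K`, the concentration `alpha`, and the standard-normal base measure are assumptions made for this example; the passage's Gibbs samplers are not reproduced here.

```python
import numpy as np

def stick_breaking_dp(alpha, K, base_sampler, rng=None):
    """Truncated stick-breaking draw of a Dirichlet process.

    Returns K atom locations and their weights. The weights are built by
    breaking a unit-length stick with independent Beta(1, alpha) proportions.
    """
    rng = rng or np.random.default_rng(0)
    betas = rng.beta(1.0, alpha, size=K)                    # stick-breaking proportions v_k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                             # pi_k = v_k * prod_{j<k}(1 - v_j)
    atoms = base_sampler(K, rng)                            # atoms drawn i.i.d. from the base measure
    return atoms, weights / weights.sum()                   # renormalize the truncated tail

atoms, weights = stick_breaking_dp(alpha=2.0, K=50,
                                   base_sampler=lambda k, r: r.normal(size=k))
print(weights[:5], weights.sum())
```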
] |
[
{
"docid": "2cea5f37c8c03fc0b6abc9e5d70bb1b3",
"text": "This paper summarize our approach to author profiling task – a part of evaluation lab PAN’13. We have used ensemble-based classification on large features set. All the features are roughly described and experimental section provides evaluation of different methods and classification approaches.",
"title": ""
},
{
"docid": "beddbd22bbeb636d8e5aeb56c1863d9a",
"text": "In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction,and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of r ecent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminatin g to a unifying discussion, and a forward-looking conclusion. I. I NTRODUCTION: HISTORICAL OVERVIEW While the first modern-day industrial robot, Unimate, began work on the General Motors assembly line in 1961, and was conceived in 1954 by George Devol [1], [2], the concept of a robot has a very long history, starting in mythology and folklore, and the first mechanical predecessors (automa ta) having been constructed in Ancient Times. For example, in Greek mythology, the God Hephaestus is reputed to have made mechanical servants from gold ([3] in p.114, and [4] verse 18.419). Furthermore, a rich tradition of designing a d building mechanical, pneumatic or hydraulic automata also exists: from the automata of Ancient Egyptian temples, to th e mechanical pigeon of the Pythagorean Archytas of Tarantum circa 400BC [5], to the accounts of earlier automata found in the Lie Zi text in China in 300BC [6], to the devices of Heron of Alexandria [7] in the 1st century. The Islamic world also plays an important role in the development of automata; AlJazari, an Arab inventor, designed and constructed numerou s automatic machines, and is even reputed to have devised the first programmable humanoid robot in 1206AD [8]. The word “robot”, a Slavic word meaning servitude, was first used in this context by the Czech author Karel Capek in 1921 [9]. However, regarding robots with natural-language conversa tional abilities, it wasnt until the 1990’s that the first pio neering systems started to appear. Despite the long history of mytho logy and automata, and the fact that even the mythological handmaidens of Hephaestus were reputed to have been given a voice [3], and despite the fact that the first general-purpo se electronic speech synthesizer was developed by Noriko Omed a in Japan in 1968 [10], it wasnt until the early 1990’s that conversational robots such as MAIA [11], RHINO [12], and AESOP [13] appeared. These robots cover a range of intended application domains; for example, MAIA was intended to carry objects and deliver them, while RHINO is a museum guide robot, and AESOP a surgical robot. In more detail, the early systems include Polly, a robotic guide that could give tours in offices [14], [15]. Polly had very simple interaction capacities; it could perceive huma n feet waving a “tour wanted” signal, and then it would just use pre-determined phrases during the tour itself. A slightly m ore advanced system was TJ [16]. TJ could verbally respond to simple commands, such as “go left”, albeit through a keyboar d. RHINO, on the other hand [12], could respond to tour-start commands, but then, again, just offered a pre-programmed to ur with fixed programmer-defined verbal descriptions. Regardi ng mobile assistant robots with conversational capabilities in the 1990s, a classic system is MAIA [11], [17], obeying simple commands, and carrying objects around places, as well as the mobile office assistant which could not only deliver parcels but guide visitors described in [18], and the similar in functio nality Japanese-language robot Jijo-2 [19], [20], [21]. 
Finally, an important book from the period is [22], which is characteristi c of the traditional natural-language semantics-inspired the oretical approaches to the problem of human-robot communication, and also of the great gap between the theoretical proposals and the actual implemented systems of this early decade. What is common to all the above early systems is that they share a number of limitations. First, all of them only accept a fixed and small number of simple canned commands , and they respond with a set of canned answers . Second, the only speech acts(in the sense of Searle [23]) that they can handle are requests. Third, the dialogue they support is cle arly ot flexibly mixed initiative; in most cases it is just humaninitiative. Four, they dont really support situated language , i.e. language about their physical situations and events th at are happening around them; except for a fixed number of canned location names in a few cases. Five, they are not able to handleaffective speech ; i.e. emotion-carrying prosody is either recognized nor generated. Six, their non-verbal communication[24] capabilities are almost non-existent; for example, gestures, gait, facial expressions, and head nods are neither recognized nor produced. And seventh, their dialog ue systems are usually effectively stimulus-response or stim ulusstate-response systems; i.e. no real speech planningor purposeful dialogue generation is taking place, and certainly not in conjunction with the motor planning subsystems of the robot. Last but quite importantly, no real learning, off-line or on-the-fly is taking place in these systems; verbal behavior s have to be prescribed. All of these shortcomings of the early systems of the 1990s, effectively have become desiderata for the next two decades of research: the 2000s and 2010s, which we are in at the moment. Thus, in this paper, we will start by providing a discussion giving motivation to the need for existence of interactive r obots with natural human-robot communication capabilities, and then we will enlist a number of desiderata for such systems, which have also effectively become areas of active research in the last decade. Then, we will examine these desiderata one by one, and discuss the research that has taken place towards their fulfillment. Special consideration will be given to th e socalled “symbol grounding problem” [25], which is central to most endeavors towards natural language communication wit h physically embodied agents, such as robots. Finally, after a discussion of the most important open problems for the futur e, we will provide a concise conclusion. II. M OTIVATION : INTERACTIVE ROBOTS WITH NATURAL LANGUAGE CAPABILITIES BUT WHY? There are at least two avenues towards answering this fundamental question, and both will be attempted here. The first avenue will attempt to start from first principles and derive a rationale towards equipping robots with natural language . The second, more traditional and safe avenue, will start fro m a concrete, yet partially transient, base: application dom ains existing or potential. In more detail: Traditionally, there used to be clear separation between design and deployment phases for robots. 
Application-spec ific robots (for example, manufacturing robots, such as [26]) were: (a) designed by expert designers, (b) possibly tailor programmed and occasionally reprogrammed by specialist engineers at their installation site, and (c) interacted wi th their environment as well as with specialized operators during ac tual operation. However, the phenomenal simplicity but also the accompanying inflexibility and cost of this traditional set ting is often changing nowadays. For example, one might want to have broader-domain and less application-specific robot s, necessitating more generic designs, as well as less effort b y the programmer-engineers on site, in order to cover the vari ous contexts of operation. Even better, one might want to rely less on specialized operators, and to have robots interact a nd collaborate with non-expert humans with little if any prior training. Ideally, even the actual traditional programmin g and re-programming might also be transferred over to non-exper t humans; and instead of programming in a technical language, to be replaced by intuitive tuition by demonstration, imita tion and explanation [27], [28], [29]. Learning by demonstratio n and imitation for robots already has quite some active resea ch; but most examples only cover motor and aspects of learning, and language and communication is not involved deeply. And this is exactly where natural language and other forms of fluid and natural human-robot communication enter the picture: Unspecialized non-expert humans are used to (and quite good at) teaching and interacting with other humans through a mixture of natural language as well as nonverbal signs. Thus, it makes sense to capitalize on this existing ab ility of non-expert humans by building robots that do not require humans to adapt to them in a special way, and which can fluidly collaborate with other humans, interacting with the m and being taught by them in a natural manner, almost as if they were other humans themselves. Thus, based on the above observations, the following is one classic line of motivation towards justifying efforts f or equipping robots with natural language capabilities: Why n ot build robots that can comprehend and generate human-like interactive behaviors, so that they can cooperate with and b e taught by non-expert humans, so that they can be applied in a wide range of contexts with ease? And of course, as natural language plays a very important role within these behaviors, why not build robots that can fluidly converse wit h humans in natural language, also supporting crucial non-ve rbal communication aspects, in order to maximize communication effectiveness, and enable their quick and effective applic ation? Thus, having presented the classical line of reasoning arriving towards the utility of equipping robots with natural language capabilities, and having discussed a space of possibilities regarding role assignment between human and rob ot, let us now move to the second, more concrete, albeit less general avenue towards justifying conversational robots: nam ely, specific applications, existing or potential. Such applica tions, where natural human-robot interaction capabilities with v erbal and non-verbal aspects would be desirabl",
"title": ""
},
{
"docid": "f154b293b364498f228c71af14813ad2",
"text": "advantage of array antenna structures to better process the incoming signals. They also have the ability to identify multiple targets. This paper explores the eigen-analysis category of super resolution algorithm. A class of Multiple Signal Classification (MUSIC) algorithms known as a root-MUSIC algorithm is presented in this paper. The root-MUSIC method is based on the eigenvectors of the sensor array correlation matrix. It obtains the signal estimation by examining the roots of the spectrum polynomial. The peaks in the spectrum space correspond to the roots of the polynomial lying close to the unit circle. Statistical analysis of the performance of the processing algorithm and processing resource requirements are discussed in this paper. Extensive computer simulations are used to show the performance of the algorithms.",
"title": ""
},
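A minimal numerical sketch of the root-MUSIC idea described in the passage above: estimate the array covariance, take the noise subspace, form the polynomial from the diagonal sums of the noise-subspace projector, and read directions of arrival from the roots closest to the unit circle. The uniform linear array with half-wavelength spacing and the simulated two-source scene are assumptions for the example, not details from the paper.

```python
import numpy as np

def root_music(X, d, spacing=0.5):
    """Estimate directions of arrival with root-MUSIC.

    X: (M, N) snapshots from a uniform linear array, d: number of sources,
    spacing: element spacing in wavelengths.
    """
    M, N = X.shape
    R = X @ X.conj().T / N                        # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = eigvecs[:, :M - d]                       # noise subspace
    C = En @ En.conj().T
    # Root-MUSIC polynomial: coefficients are the sums of the diagonals of C,
    # ordered from the highest power of z down to the lowest.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]            # keep roots inside the unit circle
    closest = roots[np.argsort(1.0 - np.abs(roots))[:d]]  # d roots nearest the circle
    return np.degrees(np.arcsin(np.angle(closest) / (2.0 * np.pi * spacing)))

# Simulated example (assumed geometry): two sources at -10 and 20 degrees.
rng = np.random.default_rng(1)
M, N, angles = 8, 200, np.radians([-10.0, 20.0])
A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
print(np.sort(root_music(X, d=2)))
```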
{
"docid": "0d2ddb448c01172e53f19d9d5ac39f21",
"text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.",
"title": ""
},
{
"docid": "6330bfa6be0361e2c0d2985372db9f0a",
"text": "The increasing pervasiveness of the internet, broadband connections and the emergence of digital compression technologies have dramatically changed the face of digital music piracy. Digitally compressed music files are essentially a perfect public economic good, and illegal copying of these files has increasingly become rampant. This paper presents a study on the behavioral dynamics which impact the piracy of digital audio files, and provides a contrast with software piracy. Our results indicate that the general ethical model of software piracy is also broadly applicable to audio piracy. However, significant enough differences with software underscore the unique dynamics of audio piracy. Practical implications that can help the recording industry to effectively combat piracy, and future research directions are highlighted.",
"title": ""
},
{
"docid": "88b0bdfb06e91f63d1930814388d0c9c",
"text": "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: //bitbucket.org/deeplab/deeplab-public.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
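The tie-strength model in the passage above is built from transactional features. As a hedged illustration of that kind of predictive model, the sketch below fits a simple regression on synthetic per-link features; the feature names, the synthetic data, and the linear model are placeholders, not the paper's actual features or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-link transactional features: messages exchanged, files
# transferred, shared photos, and days since last interaction.
rng = np.random.default_rng(0)
X = rng.poisson(lam=[20, 3, 5, 40], size=(500, 4)).astype(float)
# Synthetic "ground truth" tie strength in [0, 1], for illustration only.
y = np.clip(0.02 * X[:, 0] + 0.05 * X[:, 1] + 0.03 * X[:, 2]
            - 0.005 * X[:, 3] + rng.normal(0, 0.05, 500), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out links:", round(model.score(X_te, y_te), 3))
```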
{
"docid": "9a6a724f8aa0ae4fa9de1367f8661583",
"text": "In this paper, we develop a simple algorithm to determine the required number of generating units of wind-turbine generator and photovoltaic array, and the associated storage capacity for stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of battery should be periodically invariant. The optimal sizing of hybrid microgrid is given in the sense that the life cycle cost of system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.",
"title": ""
},
{
"docid": "5554bea693ba285e74f72b8a7b13230a",
"text": "Multitasking is the result of time allocation decisions made by individuals faced with multiple tasks. Multitasking research is important in order to improve the design of systems and applications. Since people typically use computers to perform multiple tasks at the same time, insights into this type of behavior can help develop better systems and ideal types of computer environments for modern multitasking users. In this paper, we define multitasking based on the principles of task independence and performance concurrency and develop a set of metrics for computer-based multitasking. The theoretical foundation of this metric development effort stems from an application of key principles of Activity Theory and a systematic analysis of computer usage from the perspective of the user, the task and the technology. The proposed metrics, which range from a lean dichotomous variable to a richer measure based on switches, were validated with data from a sample of users who self-reported their activities during a computer usage session. This set of metrics can be used to establish a conceptual and methodological foundation for future multitasking studies.",
"title": ""
},
{
"docid": "0c1001c6195795885604a2aaa24ddb07",
"text": "Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user--AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.",
"title": ""
},
{
"docid": "c1fa2b5da311edb241dca83edcf327a4",
"text": "The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.",
"title": ""
},
{
"docid": "5353d9e123261783a5bcb02adaac09b2",
"text": "This work presents a new digital control strategy of a three-phase PWM inverter for uninterruptible power supplies (UPS) systems. To achieve a fast transient response, a good voltage regulation, nearly zero steady state inverter output voltage error, and low total harmonic distortion (THD), the proposed control method consists of two discrete-time feedback controllers: a discrete-time optimal + sliding-mode voltage controller in outer loop and a discrete-time optimal current controller in inner loop. To prove the effectiveness of the proposed technique, various simulation results using Matlab/Simulink are shown under both linear and nonlinear loads.",
"title": ""
},
{
"docid": "6196444488388da0ab6a6b79d05af6e0",
"text": "Data mining techniques are becoming very popular nowadays because of the wide availability of huge quantity of data and the need for transforming such data into knowledge. In today’s globalization, core banking model and cut throat competition making banks to struggling to gain a competitive edge over each other. The face to face interaction with customer is no more exists in the modern banking world. Banking systems collect huge amounts of data on day to day basis, be it customer information, transaction details like deposits and withdrawals, loans, risk profiles, credit card details, credit limit and collateral details related information. Thousands of decisions are taken in a bank on daily basis. In recent years the ability to generate, capture and store data has increased enormously. The information contained in this data can be very important. The wide availability of huge amounts of data and the need for transforming such data into knowledge encourage IT industry to use data mining. Lending is the primary business of the banks. Credit Risk Management is one of the most important and critical factor in banking world. Without proper credit risk management banks will face huge losses and lending becomes very tough for the banks. Data mining techniques are greatly used in the banking industry which helps them compete in the market and provide the right product to the right customer with less risk. Credit risks which account for the risk of loss and loan defaults are the major source of risk encountered by banking industry. Data mining techniques like classification and prediction can be applied to overcome this to a great extent. In this paper we introduce an effective prediction model for the bankers that help them predict the credible customers who have applied for loan. Decision Tree Induction Data Mining Algorithm is applied to predict the attributes relevant for credibility. A prototype of the model is described in this paper which can be used by the organizations in making the right decision to approve or reject the loan request of the customers. Keywords— Banking industry; Data Mining; Risk Management; Classification; Credit Scoring; Non-Performing Assets; Default Detection; Non-Performing Loans Decision Tree; Credit Risk Assessment; Classification; Prediction --------------------------------------------------------------------***----------------------------------------------------------",
"title": ""
},
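As a small illustration of the decision-tree induction step described in the passage above, the sketch below trains a depth-limited tree on a toy applicant table and prints the learned rules. The column names and records are hypothetical placeholders, not the attributes used in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant records (placeholder attributes and values).
data = pd.DataFrame({
    "income":         [45, 80, 22, 60, 30, 95, 28, 70, 40, 55],
    "loan_amount":    [10, 25, 15, 20, 18, 30, 22, 12, 16, 14],
    "credit_history": [ 1,  1,  0,  1,  0,  1,  0,  1,  0,  1],
    "defaulted":      [ 0,  0,  1,  0,  1,  0,  1,  0,  1,  0],
})
X, y = data.drop(columns="defaulted"), data["defaulted"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X.columns)))   # human-readable rules
print("held-out accuracy:", tree.score(X_te, y_te))
```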
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "1c5591bec1b8bfab63309aa2eb488e83",
"text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.",
"title": ""
},
{
"docid": "fcb4de96f37256bb8d189f6518c0bb41",
"text": "We relate the sources of innovation market failure to the dominant mode of sectoral innovation and outline mechanisms for public support of innovation that target specific sources of innovation market failure. q 2000 Elsevier Science B.V. All",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "29713533202820951b0d4e5ed49a009e",
"text": "This work presents a start-up boosting circuit designed for fast stabilization of a 2-transistor voltage reference. A clock injection method is used to induce a large bias on the 2-transistor voltage reference resulting in a fast output voltage settling which is critical to reducing initialization time for analog components, reducing energy consumption. The fast stabilization technique is implemented in 180nm CMOS process and uses 0.404mm2 of area. Measurement from test chips shows 50.4μW power consumption during start-up phase with 133× speed gain.",
"title": ""
},
{
"docid": "26e7469d45ba1df6c1a03a2b1fa6ec15",
"text": "The Internet appears to have become an ever-increasing part in many areas of people’s day-to-day lives. One area that deserves further examination surrounds sex addiction and its relationship with excessive Internet usage. It has been alleged by some academics that social pathologies are beginning to surface in cyberspace and have been referred to as “technological addictions.” This article examines the concept of “Internet addiction” in relation to excessive sexual behavior. It contains discussions of the concept of sexual addiction and whether the whole concept is viable. This is done through the evaluation of the small amount of empirical data available. It is concluded that Internet sex is a new medium of expression that may increase participation because of the perceived anonymity and disinhibition factors. It is also argued that although the amount of empirical data is small, Internet sex addiction exists and that there are many opportunities for future research. These are explicitly outlined.",
"title": ""
},
{
"docid": "216e38bb5e6585099e949572f7645ebf",
"text": "The graviperception of the hypotrichous ciliate Stylonychia mytilus was investigated using electrophysiological methods and behavioural analysis. It is shown that Stylonychia can sense gravity and thereby compensates sedimentation rate by a negative gravikinesis. The graviresponse consists of a velocity-regulating physiological component (negative gravikinesis) and an additional orientational component. The latter is largely based on a physical mechanism but might, in addition, be affected by the frequency of ciliary reversals, which is under physiological control. We show that the external stimulus of gravity is transformed to a physiological signal, activating mechanosensitive calcium and potassium channels. Earlier electrophysiological experiments revealed that these ion channels are distributed in the manner of two opposing gradients over the surface membrane. Here, we show, for the first time, records of gravireceptor potentials in Stylonychia that are presumably based on this two-gradient system of ion channels. The gravireceptor potentials had maximum amplitudes of approximately 4 mV and slow activation characteristics (0.03 mV s(-1)). The presumptive number of involved graviperceptive ion channels was calculated and correlates with the analysis of the locomotive behaviour.",
"title": ""
}
] |
scidocsrr
|
40fc19031206c07f5a786f43141f4cd8
|
Improved Asymmetric Locality Sensitive Hashing (ALSH) for Maximum Inner Product Search (MIPS)
|
[
{
"docid": "9a4a519023175802578dad5864b3dd01",
"text": "The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.",
"title": ""
},
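For context on the search problem in the passage above, the baseline that tree-based branch-and-bound methods are meant to beat is a brute-force scan over all inner products. A minimal version of that baseline (not the paper's tree algorithms) is sketched below; the random data are placeholders.

```python
import numpy as np

def mips_bruteforce(database, queries, k=1):
    """Exact maximum inner product search by a full scan.

    database: (n, d) item vectors, queries: (m, d) query vectors.
    Returns the indices of the top-k items per query. Unlike nearest-neighbour
    search, item norms matter here, so the database is not normalized.
    """
    scores = queries @ database.T                 # (m, n) inner products
    return np.argsort(-scores, axis=1)[:, :k]     # top-k columns per row

rng = np.random.default_rng(0)
items, queries = rng.normal(size=(10000, 64)), rng.normal(size=(5, 64))
print(mips_bruteforce(items, queries, k=3))
```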
{
"docid": "2052b47be2b5e4d0c54ab0be6ae1958b",
"text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .",
"title": ""
}
] |
[
{
"docid": "d48a5e5005e757af878e97c0d63a50da",
"text": "Measures of Semantic Relatedness determine the degree of relatedness between two words. Most of these measures work only between pairs of words in a single language. We propose a novel method of measuring semantic relatedness between pairs of words in two different languages. This method does not use a parallel corpus but is rather seeded with a set of known translations. For evaluation we construct a cross-language dataset of French-English word pairs with similarity scores. Our new cross-language measure correlates more closely with averaged human scores than our unilingual baselines. 1. Distributional Semantics “You shall know a word by the company it keeps” – Firth (1957) •Construct a word-context matrix • Corpora: French and English Wikipedias • Used POS-tagged words as contexts • Re-weight matrix – Pointwise Mutual Information (PMI) •Cosine similarity •Evaluate correlation on Rubenstein and Goodenough (1965) style dataset",
"title": ""
},
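A toy sketch of the distributional pipeline outlined in the passage above: build a word-context co-occurrence matrix, reweight it with positive PMI, and compare words by cosine similarity. The three-sentence corpus and the window size are assumptions for the example; the cross-language seeding step is not shown.

```python
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug",
          "a cat and a dog played"]
window = 2
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Word-context co-occurrence counts within a symmetric window.
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                counts[idx[w], idx[sent[j]]] += 1

# Positive pointwise mutual information reweighting.
total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print("sim(cat, dog) =", round(cosine(ppmi[idx["cat"]], ppmi[idx["dog"]]), 3))
```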
{
"docid": "9b5b7c575a5d6912ddcbe82e539cc5f6",
"text": "This paper describes a grasp planning for a mobile manipulator which works in real environment. Mobile robot studies up to now that manipulate an object in real world practically use ID tag on an object or an object model which is given to the robot in advance. The authors aim to develop a mobile manipulator that can acquire an object model through video images and can manipulate the object. In this approach, the robot can manipulate an unknown object autonomously. A grasp planning proposed in this paper can find a stable grasp pose from the automatically generated model which contains redundant data and the shape error of the object. Experiments show the effectiveness of the proposed method",
"title": ""
},
{
"docid": "3b72c70213ccd3d5f3bda5cc2e2c6945",
"text": "Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the softmax layer over a large vocabulary. We observe that, in decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated preciously and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to a NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining close to the full softmax baseline accuracy on neural machine translation and language modeling tasks. We also prove the theoretical guarantee on the softmax approximation quality.",
"title": ""
},
{
"docid": "14fa72af2a1a4264b2e84e6c810df326",
"text": "This paper presents a clustering approach that simultaneously identifies product features and groups them into aspect categories from online reviews. Unlike prior approaches that first extract features and then group them into categories, the proposed approach combines feature and aspect discovery instead of chaining them. In addition, prior work on feature extraction tends to require seed terms and focus on identifying explicit features, while the proposed approach extracts both explicit and implicit features, and does not require seed terms. We evaluate this approach on reviews from three domains. The results show that it outperforms several state-of-the-art methods on both tasks across all three domains.",
"title": ""
},
{
"docid": "7e77adbdb66b24c0a2a4ba22993bd7f7",
"text": "This paper provides an overview of research on social media and body image. Correlational studies consistently show that social media usage (particularly Facebook) is associated with body image concerns among young women and men, and longitudinal studies suggest that this association may strengthen over time. Furthermore, appearance comparisons play a role in the relationship between social media and body image. Experimental studies, however, suggest that brief exposure to one’s own Facebook account does not negatively impact young women’s appearance concerns. Further longitudinal and experimental research is needed to determine which aspects of social media are most detrimental to people’s body image concerns. Research is also needed on more diverse samples as well as other social media platforms (e.g., Instagram).",
"title": ""
},
{
"docid": "48b48ac2c811976ca6daf7d180eb895f",
"text": "Open information extraction approaches have led to the creation of large knowledge bases from the Web. The problem with such methods is that their entities and relations are not canonicalized, leading to redundant and ambiguous facts. For example, they may store {Barack Obama, was born, Honolulu and {Obama, place of birth, Honolulu}. In this paper, we present an approach based on machine learning methods that can canonicalize such Open IE triples, by clustering synonymous names and phrases.\n We also provide a detailed discussion about the different signals, features and design choices that influence the quality of synonym resolution for noun phrases in Open IE KBs, thus shedding light on the middle ground between \"open\" and \"closed\" information extraction systems.",
"title": ""
},
{
"docid": "438094ef7913de0236b57a85e7d511c2",
"text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
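As an illustration of the TOPSIS ranking step mentioned in the passage above, the sketch below implements plain (crisp) TOPSIS; the fuzzy-ANP weighting stage is not reproduced, and the supplier scores, weights, and benefit/cost directions are made-up values.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix: (alternatives x criteria) scores, weights: criterion weights,
    benefit: True for criteria to maximize, False for cost criteria.
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector normalization
    v = norm * weights                                         # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))    # positive ideal solution
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))    # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                             # closeness coefficient

# Hypothetical supplier scores on cost, quality, and delivery criteria.
scores  = np.array([[3.2, 8.5, 7.0],
                    [2.8, 7.9, 9.1],
                    [4.0, 9.2, 6.5]])
weights = np.array([0.4, 0.35, 0.25])
benefit = np.array([False, True, True])                        # cost is minimized
cc = topsis(scores, weights, benefit)
print("ranking (best first):", np.argsort(-cc))
```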
{
"docid": "a4581bc315cee9824cdc96747a785359",
"text": "Networks are widely used to model structured data and enable various downstream applications. However, in the real world, most data are structureless, and the assumption of a given network for each particular task is oen invalid. In this work, given a set of objects, we propose to leverage data cube to organize its enormous ambient data. Upon that, we further provide a reinforcement learning algorithm to automatically explore the cube structure and eciently select appropriate data for the construction of a quality network, which can facilitate various tasks on the given set of objects. With extensive experiments of two classic networkmining tasks on dierent real-world large datasets, we show that our proposed cube2net pipeline is general, and much more eective and ecient in quality network construction, compared with other methods without the leverage of data cube or reinforcement learning.",
"title": ""
},
{
"docid": "f9c4f413618d94b78b96c8cb188e09c5",
"text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k2) to O(k) where k is the number of samples. We use the column wise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our 1This work was supported in part by the Nanyang Assistant Professorship (M4080134), JSPSNTU joint project (M4080882), Natural Science Foundation of China (61105013), and National Science and Technology Pillar Program (2012BAI14B03). Part of this work was done when Yang Cong was a research fellow at NTU. Preprint submitted to Pattern Recognition January 30, 2013 method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method.",
"title": ""
},
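A minimal sketch of the sparse-reconstruction-cost idea from the passage above: reconstruct a test sample over a dictionary of normal samples under an l1 penalty and treat a large residual cost as abnormal. scikit-learn's Lasso stands in for the paper's weighted formulation and dictionary selection, and the random data are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruction_cost(dictionary, sample, alpha=0.05):
    """Reconstruction cost of `sample` over the columns of `dictionary`
    under an l1-sparsity penalty; large values suggest an abnormal sample."""
    lasso = Lasso(alpha=alpha, max_iter=5000)
    lasso.fit(dictionary, sample)
    residual = sample - dictionary @ lasso.coef_
    return 0.5 * np.sum(residual ** 2) + alpha * np.sum(np.abs(lasso.coef_))

rng = np.random.default_rng(0)
basis = rng.normal(size=(100, 3))                  # 3 latent "normal" directions
D = basis @ rng.normal(size=(3, 200))              # dictionary of 200 normal samples
normal_test = (basis @ rng.normal(size=3)).ravel() # lies in the normal subspace
abnormal_test = rng.normal(size=100)               # does not fit the normal subspace
print("SRC normal  :", round(sparse_reconstruction_cost(D, normal_test), 3))
print("SRC abnormal:", round(sparse_reconstruction_cost(D, abnormal_test), 3))
```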
{
"docid": "4332198661cfadafc57458ef6c1fd0f1",
"text": "While recent corpus annotation efforts cover a wide variety of semantic structures, work on temporal and causal relatio ns is still in its early stages. Annotation efforts have typically considere either temporal relations or causal relations, but not bot h, and no corpora currently exist that allow the relation between temporals a nd causals to be examined empirically. We have annotated a co rpus f 1000 event pairs for both temporal and causal relations, focusin g on a relatively frequent construction in which the events a re conjoined by the word and. Temporal relations were annotated using an extension of th e BEFOREandAFTER scheme used in the TempEval competition, and causal relations were annotated using a scheme based on c onnective phrases like and as a result . The annotators achieved 81.2% agreement on temporal relations and 77.8% agreement on caus al relations. Analysis of the resulting corpus revealed som e interesting findings, for example, that over 30% of CAUSAL relations do not have an underlying BEFORErelation. The corpus was also explored using machine learning methods, and while model performanc e ex eeded all baselines, the results suggested that simple grammatical cues may be insufficient for identifying the more difficult te mporal and causal relations.",
"title": ""
},
{
"docid": "50c0f3cdccc1fe63f3fcb4cb3c983617",
"text": "Junho Yang Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: yang125@illinois.edu Ashwin Dani Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: adani@illinois.edu Soon-Jo Chung Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: sjchung@illinois.edu Seth Hutchinson Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: seth@illinois.edu",
"title": ""
},
{
"docid": "ba4cf5c09f167f74e573fdc196ac41a4",
"text": "In this paper we first give a presentation of the history and organisation of the electricity market in Scandinavia, which has been gradually restructured over the last decade. A futures market has been in operation there since September 1995. We analyse the historical prices in the spot and futures markets, using general theory for pricing of commodities futures contracts. We find that the futures prices on average exceeded the actual spot price at delivery. Hence, we conclude that there is a negative risk premium in the electricity futures market. This result contradicts the findings in most other commodities markets, where the risk premium from holding a futures contract tend to be zero or positive. Physical factors like unexpected precipitation can contribute to explain parts of the observations. However, we also identify the difference in flexibility between the supply and demand sides of the electricity market, leaving the demand side with higher incentive to hedge their positions in the futures market, as a possible explanation for the negative risk premium. The limited data available might not be sufficient to draw fully conclusive results. However, the analysis described in the paper can be repeated with higher significance in a few years from now.",
"title": ""
},
{
"docid": "15b0b080f27059cca6b137e71144712e",
"text": "The current study explored the elaborative retrieval hypothesis as an explanation for the testing effect: the tendency for a memory test to enhance retention more than restudying. In particular, the retrieval process during testing may activate elaborative information related to the target response, thereby increasing the chances that activation of any of this information will facilitate later retrieval of the target. In a test of this view, participants learned cue-target pairs, which were strongly associated (e.g., Toast: Bread) or weakly associated (e.g., Basket: Bread), through either a cued recall test (Toast: _____) or a restudy opportunity (Toast: Bread). A final test requiring free recall of the targets revealed that tested items were retained better than restudied items, and although strong cues facilitated recall of tested items initially, items recalled from weak cues were retained better over time, such that this advantage was eliminated or reversed at the time of the final test. Restudied items were retained at similar rates on the final test regardless of the strength of the cue-target relationship. These results indicate that the activation of elaborative information-which would occur to a greater extent during testing than restudying--may be one mechanism that underlies the testing effect.",
"title": ""
},
{
"docid": "0e3f43a28c477ae0e15a8608d3a1d4a5",
"text": "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer’s Disease. We found that a slightly unconventional ”stacked 2D” approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular ”tri-planar” approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement. ar X iv :1 50 5. 02 00 0v 1 [ cs .L G ] 8 M ay 2 01 5",
"title": ""
},
{
"docid": "50708eb1617b59f605b926583d9215bf",
"text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.",
"title": ""
},
{
"docid": "e1d22370dcfde0a01e74aca71e6fbe2f",
"text": "The liposuction technique has changed greatly over the years. In 1989, the authors presented subdermal superficial liposuction which treats the superficial fat layer and yields better skin retraction. With this technique the surgeon can treat thin adipose layers to obtain better results in more cases than the traditional liposuction technique. The technique can be used in cases with difficult skin adjustment and in secondary cases when “deep only” liposuction has been performed and there were residual adiposities. Subdermal superficial liposuction evolved so that one could obtain good skin retraction by performing massive liposuction of all the fat layers. The authors named this technique MALL (Massive All Layer Liposuction). The technique is applied in body areas where the fat layer is very thick and stretches the skin because of its volume and weight such as in the abdomen, posterior arms, and internal surface of the upper third of the thighs. MALL liposuction drastically reduces the indications for abdominoplasty and inner thigh and arm dermolipectomies. Knowledge of the anatomy of the subcutaneous fat and the superficial fascial system allows one to explain the subdermal superficial liposuction from an anatomical point of view, to perform a more rational and effetive procedure, and to differentiate the technique depending on the area of the body.",
"title": ""
},
{
"docid": "9b18acdff51b379ec112b3c1a58307fb",
"text": "Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated it as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle, that weaker cues are easier to be learned in an unsupervised manner and also transfer better. In the context of puzzle recognition, we use an iterative manner which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features on each patch into a cost function judging the correctness of the current configuration. Our approach, by taking similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, that prior methods are difficult to handle. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in a few visual recognition tasks including image classification, object detection, and semantic segmentation.",
"title": ""
},
{
"docid": "85a076e58f4d117a37dfe6b3d68f5933",
"text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"title": ""
}
] |
scidocsrr
|
15e09675a351e7c3aca4eb277d176de3
|
Density-based semi-supervised clustering
|
[
{
"docid": "368a3dd36283257c5573a7e1ab94e930",
"text": "This paper develops the multidimensional binary search tree (or <italic>k</italic>-d tree, where <italic>k</italic> is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The <italic>k</italic>-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an <italic>n</italic> record file are: insertion, <italic>O</italic>(log <italic>n</italic>); deletion of the root, <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-1)/<italic>k</italic></supscrpt>); deletion of a random node, <italic>O</italic>(log <italic>n</italic>); and optimization (guarantees logarithmic performance of searches), <italic>O</italic>(<italic>n</italic> log <italic>n</italic>). Search algorithms are given for partial match queries with <italic>t</italic> keys specified [proven maximum running time of <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-<italic>t</italic>)/<italic>k</italic></supscrpt>)] and for nearest neighbor queries [empirically observed average running time of <italic>O</italic>(log <italic>n</italic>).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that <italic>k</italic>-d trees could be quite useful in many applications, and examples of potential uses are given.",
"title": ""
}
] |
[
{
"docid": "5e58638e766904eb84380b53cae60df2",
"text": "BACKGROUND\nAneurysmal subarachnoid hemorrhage (SAH) accounts for 5% of strokes and carries a poor prognosis. It affects around 6 cases per 100,000 patient years occurring at a relatively young age.\n\n\nMETHODS\nCommon risk factors are the same as for stroke, and only in a minority of the cases, genetic factors can be found. The overall mortality ranges from 32% to 67%, with 10-20% of patients with long-term dependence due to brain damage. An explosive headache is the most common reported symptom, although a wide spectrum of clinical disturbances can be the presenting symptoms. Brain computed tomography (CT) allow the diagnosis of SAH. The subsequent CT angiography (CTA) or digital subtraction angiography (DSA) can detect vascular malformations such as aneurysms. Non-aneurysmal SAH is observed in 10% of the cases. In patients surviving the initial aneurysmal bleeding, re-hemorrhage and acute hydrocephalus can affect the prognosis.\n\n\nRESULTS\nAlthough occlusion of an aneurysm by surgical clipping or endovascular procedure effectively prevents rebleeding, cerebral vasospasm and the resulting cerebral ischemia occurring after SAH are still responsible for the considerable morbidity and mortality related to such a pathology. A significant amount of experimental and clinical research has been conducted to find ways in preventing these complications without sound results.\n\n\nCONCLUSIONS\nEven though no single pharmacological agent or treatment protocol has been identified, the main therapeutic interventions remain ineffective and limited to the manipulation of systemic blood pressure, alteration of blood volume or viscosity, and control of arterial dioxide tension.",
"title": ""
},
{
"docid": "a4f2a82daf86314363ceeac34cba7ed9",
"text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.",
"title": ""
},
{
"docid": "e8978b519808e0bfd24ffe2cfcc8499b",
"text": "Experimental and numerical analysis of loads and responses for planing craft in waves is considered. Extensive experiments have been performed on a planing craft, in full-scale as well as in model scale. The test set-ups and significant results are reviewed. The required resolution in experiments on planing craft in waves, concerning sampling frequencies, filtering and pressure transducer areas, is investigated. The aspects of peak identification in transient signals, fitting of analytical cumulative distribution functions to sampled data, and statistical convergence are treated. A method for reconstruction of the momentary pressure distribution at hull-water impact, from measurements with a limited number of transducers, is presented. The method is evaluated to full-scale data, and is concluded to be applicable in detailed evaluation of the hydrodynamic load distribution in time-domain simulations. Another suggested area of application is in full-scale design evaluations, where it can improve the traceability, i.e. enable evaluation of the loads along with the responses with more confidence. The presented model experiment was designed to enable time-domain monitoring of the complete hydromechanic pressure distribution on planing craft in waves. The test set-up is evaluated by comparing vertical forces and pitching moments derived from acceleration measurements, with the corresponding forces derived with the pressure distribution reconstruction method. Clear correlation is found. An approach for direct calculations of loads, as well as motion and structure response, is presented. Hydrodynamic loads and motion responses are calculated with a non-linear timedomain strip method. Structure responses are calculated by applying momentary distributed pressure loads, formulated from hydrodynamic simulations, on a global finite element model with inertia relief. From the time series output, limiting conditions and extreme responses are determined by means of short term statistics. Promising results are demonstrated in applications, where extreme structure responses derived by the presented approach, are compared with responses to equivalent uniform rule based loads, and measured responses from the full-scale trials. It is concluded that the approach is a useful tool for further research, which could be developed into a rational design method.",
"title": ""
},
{
"docid": "358cd39cf050f8c38c1378d6c5dee65a",
"text": "Despite the fact that benefits of adopting cloud computing services and applications have become well known within the communities of information technology, adoption of such technology still facing various challenges. Currently, establishing trust is considered as the main barrier preventing individuals and enterprises adopting cloud solutions. Though, to best of our knowledge it has not been considered as a main factor when investigating cloud computing acceptance whereas the majority of studies in the literature focused mainly on finding out ways and approaches increasing trust of cloud computing solutions. Therefore, we present a revised Unified Theory of Acceptance and Use of Technology (UTAUT) where trust is considered as a main construct beside its original constructs.",
"title": ""
},
{
"docid": "25c412af8e072bf592ebfa1aa0168aa1",
"text": "One of the most promising strategies to improve the bioavailability of active pharmaceutical ingredients is based on the association of the drug with colloidal carriers, for example, polymeric nanoparticles, which are stable in biological environment, protective for encapsulated substances and able to modulate physicochemical characteristics, drug release and biological behaviour. The synthetic polymers possess unique properties due to their chemical structure. Some of them are characterized with mucoadhesiveness; another can facilitate the penetration through mucous layers; or to be stimuli responsive, providing controlled drug release at the target organ, tissues or cells; and all of them are biocompatible and versatile. These are suitable vehicles of nucleic acids, oligonucleotides, DNA, peptides and proteins. This chapter aims to look at the ‘hot spots’ in the design of synthetic polymer nanoparticles as an intelligent drug delivery system in terms of biopharmaceutical challenges and in relation to the route of their administration: the non-invasive—oral, transdermal, transmucosal (nasal, buccal/sublingual, vaginal, rectal and ocular) and inhalation routes—and the invasive parenteral route.",
"title": ""
},
{
"docid": "98742b27582cf8e56e01f435daa3aa78",
"text": "Temporal segmentation of human motion into actions is central to the understanding and building of computational models of human motion and activity recognition. Several issues contribute to the challenge of temporal segmentation and classification of human motion. These include the large variability in the temporal scale and periodicity of human actions, the complexity of representing articulated motion, and the exponential nature of all possible movement combinations. We provide initial results from investigating two distinct problems -classification of the overall task being performed, and the more difficult problem of classifying individual frames over time into specific actions. We explore first-person sensing through a wearable camera and inertial measurement units (IMUs) for temporally segmenting human motion into actions and performing activity classification in the context of cooking and recipe preparation in a natural environment. We present baseline results for supervised and unsupervised temporal segmentation, and recipe recognition in the CMU-multimodal activity database (CMU-MMAC).",
"title": ""
},
{
"docid": "73c8978b793d7904264f0e78d9efdc61",
"text": "The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: At each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality to perform the task in unimodal conditions (shortest reaction time criteria), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.",
"title": ""
},
{
"docid": "2f7a0eaf15515a9cf8cbbebc4d734072",
"text": "Rifampicin (Rif) is one of the most potent and broad spectrum antibiotics against bacterial pathogens and is a key component of anti-tuberculosis therapy, stemming from its inhibition of the bacterial RNA polymerase (RNAP). We determined the crystal structure of Thermus aquaticus core RNAP complexed with Rif. The inhibitor binds in a pocket of the RNAP beta subunit deep within the DNA/RNA channel, but more than 12 A away from the active site. The structure, combined with biochemical results, explains the effects of Rif on RNAP function and indicates that the inhibitor acts by directly blocking the path of the elongating RNA when the transcript becomes 2 to 3 nt in length.",
"title": ""
},
{
"docid": "58061318f47a2b96367fe3e8f3cd1fce",
"text": "The growth of lymphatic vessels (lymphangiogenesis) is actively involved in a number of pathological processes including tissue inflammation and tumor dissemination but is insufficient in patients suffering from lymphedema, a debilitating condition characterized by chronic tissue edema and impaired immunity. The recent explosion of knowledge on the molecular mechanisms governing lymphangiogenesis provides new possibilities to treat these diseases.",
"title": ""
},
{
"docid": "3d5ab2c686c11527296537b4c8396ae2",
"text": "This study investigated writing beliefs, self-regulatory behaviors, and epistemology beliefs of preservice teachers in academic writing tasks. Students completed self-report measures of selfregulation, epistemology, and beliefs about writing. Both knowledge and regulation of cognition were positively related to writing enjoyment, and knowledge of cognition was negatively related to beliefs of ability as a fixed entity. Enjoyment of writing was related to learnability and selfassessment. It may be that students who are more self-regulated during writing also believe they can learn to improve their writing skills. It may be, however, that students who believe writing is learnable will exert the effort to self-regulate during writing. Student beliefs and feelings about learning and writing play an important and complex role in their self-regulation behaviors. Suggestions for instruction are included, and continued research of students’ beliefs and selfregulation in naturalistic contexts is recommended.",
"title": ""
},
{
"docid": "6cb0c739d4cb0b8d59f17d2d37cb5caa",
"text": "In this work, a context-based multisensor system, applied for pedestrian detection in urban environment, is presented. The proposed system comprises three main processing modules: (i) a LIDAR-based module acting as primary object detection, (ii) a module which supplies the system with contextual information obtained from a semantic map of the roads, and (iii) an image-based detection module, using sliding-window detectors, with the role of validating the presence of pedestrians in regions of interest (ROIs) generated by the LIDAR module. A Bayesian strategy is used to combine information from sensors on-board the vehicle (‘local’ information) with information contained in a digital map of the roads (‘global’ information). To support experimental analysis, a multisensor dataset, named Laser and Image Pedestrian Detection dataset (LIPD), is used. The LIPD dataset was collected in an urban environment, at day light conditions, using an electrical vehicle driven at low speed. A down sampling method, using support vectors extracted from multiple linear-SVMs, was used to reduce the cardinality of the training set and, as consequence, to decrease the CPU-time during the training process of image-based classifiers. The performance of the system is evaluated, in terms of true positive rate and false positives per frame, using three image-detectors: a linear-SVM, a SVM-cascade, and a benchmark method. Additionally, experiments are performed to assess the impact of contextual information on the performance of the detection system.",
"title": ""
},
{
"docid": "b6f4a2122f8fe1bc7cb4e59ad7cf8017",
"text": "The use of biomass to provide energy has been fundamental to the development of civilisation. In recent times pressures on the global environment have led to calls for an increased use of renewable energy sources, in lieu of fossil fuels. Biomass is one potential source of renewable energy and the conversion of plant material into a suitable form of energy, usually electricity or as a fuel for an internal combustion engine, can be achieved using a number of different routes, each with specific pros and cons. A brief review of the main conversion processes is presented, with specific regard to the production of a fuel suitable for spark ignition gas engines.",
"title": ""
},
{
"docid": "ba89c498edb8361ebe0d53b203aceb06",
"text": "We present OASIS, a CPU instruction set extension for externally verifiable initiation, execution, and termination of an isolated execution environment with a trusted computing base consisting solely of the CPU. OASIS leverages the hardware components available on commodity CPUs to achieve a low-cost, low-overhead design.",
"title": ""
},
{
"docid": "d9d3b646f7d4d88b3999f5b431159afe",
"text": "The main aim of this study was to characterize neural correlates of analogizing as a cognitive contributor to fluid and crystallized intelligence. In a previous fMRI study which employed fluid analogy letter strings as criteria in a multiple plausibility design (Geake and Hansen, 2005), two frontal ROIs associated with working memory (WM) load (within BA 9 and BA 45/46) were identified as regions in which BOLD increase correlated positively with a crystallized measure of (verbal) IQ. In this fMRI study we used fluid letter, number and polygon strings to further investigate the role of analogizing in fluid (transformation string completion) and non fluid or crystallized (unique symbol counting) cognitive tasks. The multi stimulus type (letter, number, polygon) design of the analogy strings enabled investigation of a secondary research question concerning the generalizability of fluid analogizing at a neural level. A selective psychometric battery, including the Raven's Progressive Matrices (RPM), measured individual cognitive abilities. Neural activations for the effect of task-fluid analogizing (string transformation plausibility) vs. crystallized analogizing (unique symbol counting)-included bilateral frontal and parietal areas associated with WM load and fronto parietal models of general intelligence. Neural activations for stimulus type differences were mainly confined to visually specific posterior regions. ROI covariate analyses of the psychometric measures failed to find consistent co-relationships between fluid analogizing and the RPM and other subtests, except for the WAIS Digit Symbol subtest in a group of bilateral frontal cortical regions associated with the maintenance of WM load. Together, these results support claims for separate developmental trajectories for fluid cognition and general intelligence as assessed by these psychometric subtests.",
"title": ""
},
{
"docid": "f02b44ff478952f1958ba33d8a488b8e",
"text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.",
"title": ""
},
{
"docid": "e918ae2b1312292836eb661497909a83",
"text": "We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.",
"title": ""
},
{
"docid": "e8b0536f5d749b5f6f5651fe69debbe1",
"text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.",
"title": ""
},
{
"docid": "ecc105b449b0ec054cfb523704978980",
"text": "Modern information seekers face dynamic streams of large-scale heterogeneous data that are both intimidating and overwhelming. They need a strategy to filter this barrage of massive data sets, and to find all of the information responding to their information needs, despite the pressures imposed by schedules and budgets. In this applied research, we present an exploratory search strategy that allows professional information seekers to efficiently and effectively triage all of the data. We demonstrate that exploratory search is particularly useful for information filtering and large-scale information triage, regardless of the language of the data, and regardless of the particular industry, whether finance, medical, business, government, information technology, news, or legal. Our strategy reduces a dauntingly large volume of information into a manageable, high-precision data set, suitable for focused reading. This strategy is interdisciplinary, integrating concepts from information filtering, information triage, and exploratory search. Key aspects include advanced search software, interdisciplinary paired search, asynchronous collaborative search, attention to linguistic phenomena, and aggregated search results in the form of a search matrix or search grid. We present the positive results of a task-oriented evaluation in a real-world setting, discuss these results from a qualitative perspective, and share future research areas.",
"title": ""
},
{
"docid": "f5d6f3e0f408cbccfcc5d7da86453d53",
"text": "Financial fraud detection plays a crucial role in the stability of institutions and the economy at large. Data mining methods have been used to detect/flag cases of fraud due to a large amount of data and possible concept drift. In the financial statement fraud detection domain, instances containing missing values are usually discarded from experiments and this may lead to a loss of crucial information. Imputation has been previously ruled out as an option to keep instances with missing values. This paper will examine the impact of imputation in financial statement fraud in two ways. Firstly, seven similarity measures are used to benchmark ground truth data against imputed datasets where seven imputation methods are used. Thereafter, the predictive performance of imputed datasets is compared to the original data classification using three cost-sensitive classifiers: Support Vector Machines, Näıve Bayes and Random Forest.",
"title": ""
},
{
"docid": "bae6a214381859ac955f1651c7df0c0f",
"text": "The fastcluster package is a C++ library for hierarchical, agglomerative clustering. It provides a fast implementation of the most efficient, current algorithms when the input is a dissimilarity index. Moreover, it features memory-saving routines for hierarchical clustering of vector data. It improves both asymptotic time complexity (in most cases) and practical performance (in all cases) compared to the existing implementations in standard software: several R packages, MATLAB, Mathematica, Python with SciPy. The fastcluster package presently has interfaces to R and Python. Part of the functionality is designed as a drop-in replacement for the methods hclust and flashClust in R and scipy.cluster.hierarchy.linkage in Python, so that existing programs can be effortlessly adapted for improved performance.",
"title": ""
}
] |
scidocsrr
|
eb80da3e38c8b5efbcb20bda9c299730
|
Constraint-based motion optimization using a statistical dynamic model
|
[
{
"docid": "db964a7761ac16c63196ab32f4559e2e",
"text": "We present an end-to-end system that goes from video sequences to high resolution, editable, dynamically controllable face models. The capture system employs synchronized video cameras and structured light projectors to record videos of a moving face from multiple viewpoints. A novel spacetime stereo algorithm is introduced to compute depth maps accurately and overcome over-fitting deficiencies in prior work. A new template fitting and tracking procedure fills in missing data and yields point correspondence across the entire sequence without using markers. We demonstrate a data-driven, interactive method for inverse kinematics that draws on the large set of fitted templates and allows for posing new expressions by dragging surface points directly. Finally, we describe new tools that model the dynamics in the input sequence to enable new animations, created via key-framing or texture-synthesis techniques.",
"title": ""
}
] |
[
{
"docid": "6f5ee673c82d43a984e0217b5044d2dd",
"text": "Twitter currently receives about 190 million tweets (small text-based Web posts) a day, in which people share their comments regarding a wide range of topics. A large number of tweets include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for evaluating customer sentiment. To explore high-volume twitter data, we introduce three novel time-based visual sentiment analysis techniques: (1) topic-based sentiment analysis that extracts, maps, and measures customer opinions; (2) stream analysis that identifies interesting tweets based on their density, negativity, and influence characteristics; and (3) pixel cell-based sentiment calendars and high density geo maps that visualize large volumes of data in a single view. We applied these techniques to a variety of twitter data, (e.g., movies, amusement parks, and hotels) to show their distribution and patterns, and to identify influential opinions.",
"title": ""
},
{
"docid": "11ed66cfb1a686ce46b1ad0ec6cf5d13",
"text": "OBJECTIVE\nTo evaluate a novel ultrasound measurement, the prefrontal space ratio (PFSR), in second-trimester trisomy 21 and euploid fetuses.\n\n\nMETHODS\nStored three-dimensional volumes of fetal profiles from 26 trisomy 21 fetuses and 90 euploid fetuses at 15-25 weeks' gestation were examined. A line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and that of the skin (d(1)) to the distance between the skin and the point where the MM line was intercepted (d(2)) was calculated (d(2)/d(1)). The distributions of PFSR in trisomy 21 and euploid fetuses were compared, and the relationship with gestational age in each group was evaluated by Spearman's rank correlation coefficient (r(s) ).\n\n\nRESULTS\nThe PFSR in trisomy 21 fetuses (mean, 0.36; range, 0-0.81) was significantly lower than in euploid fetuses (mean, 1.48; range, 0.85-2.95; P < 0.001 (Mann-Whitney U-test)). There was no significant association between PFSR and gestational age in either trisomy 21 (r(s) = 0.25; 95% CI, - 0.15 to 0.58) or euploid (r(s) = 0.06; 95% CI, - 0.15 to 0.27) fetuses.\n\n\nCONCLUSION\nThe PFSR appears to be a highly sensitive and specific marker of trisomy 21 in the second trimester of pregnancy.",
"title": ""
},
{
"docid": "a10752bb80ad47e18ef7dbcd83d49ff7",
"text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.",
"title": ""
},
{
"docid": "54ba46965571a60e073dfab95ede656e",
"text": "ÐThis paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing. The algorithm requires between N ÿ 1 and 2
N ÿ 1 messages per critical section access, where N is the number of processes in the system. The exact message complexity can be expressed as a deterministic function of concurrency in the computation. The algorithm does not introduce any other overheads over Lamport's and RicartAgrawala's algorithms, which require 3
N ÿ 1 and 2
N ÿ 1 messages, respectively, per critical section access and are the only other decentralized algorithms that allow mutual exclusion access in the order of the timestamps of requests. Index TermsÐAlgorithm, concurrency, distributed system, fairness, mutual exclusion, synchronization.",
"title": ""
},
{
"docid": "a7e7d4232bd5c923746a1ecd7b5d4a27",
"text": "OBJECTIVE\nThe goal of this project was to determine whether screening different groups of elderly individuals in a general or specialty practice would be beneficial in detecting dementia.\n\n\nBACKGROUND\nEpidemiologic studies of aging and dementia have demonstrated that the use of research criteria for the classification of dementia has yielded three groups of subjects: those who are demented, those who are not demented, and a third group of individuals who cannot be classified as normal or demented but who are cognitively (usually memory) impaired.\n\n\nMETHODS\nThe authors conducted computerized literature searches and generated a set of abstracts based on text and index words selected to reflect the key issues to be addressed. Articles were abstracted to determine whether there were sufficient data to recommend the screening of asymptomatic individuals. Other research studies were evaluated to determine whether there was value in identifying individuals who were memory-impaired beyond what one would expect for age but who were not demented. Finally, screening instruments and evaluation techniques for the identification of cognitive impairment were reviewed.\n\n\nRESULTS\nThere were insufficient data to make any recommendations regarding cognitive screening of asymptomatic individuals. Persons with memory impairment who were not demented were characterized in the literature as having mild cognitive impairment. These subjects were at increased risk for developing dementia or AD when compared with similarly aged individuals in the general population.\n\n\nRECOMMENDATIONS\nThere were sufficient data to recommend the evaluation and clinical monitoring of persons with mild cognitive impairment due to their increased risk for developing dementia (Guideline). Screening instruments, e.g., Mini-Mental State Examination, were found to be useful to the clinician for assessing the degree of cognitive impairment (Guideline), as were neuropsychologic batteries (Guideline), brief focused cognitive instruments (Option), and certain structured informant interviews (Option). Increasing attention is being paid to persons with mild cognitive impairment for whom treatment options are being evaluated that may alter the rate of progression to dementia.",
"title": ""
},
{
"docid": "c625221e79bdc508c7c772f5be0458a1",
"text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).",
"title": ""
},
{
"docid": "df1fa2b0c426bd71627a538780c0b30f",
"text": "This brief presents a novel ultralow power CMOS voltage reference (CVR) with only 4.6-nW power consumption. In the proposed CVR circuit, the proportional-to-absolute-temperature voltage is generated by feeding the leakage current of a zero-<inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {gs}}$ </tex-math></inline-formula> nMOS transistor to two diode-connected nMOS transistors in series, both of which are in subthreshold region; while the complementary-to-absolute-temperature voltage is created by using the body diodes of another nMOS transistor. Consequently, low-power operation can be achieved without requiring resistors or bipolar junction transistors, leading to small chip area consumption. The proposed CVR circuit is fabricated in a standard 0.18-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> CMOS process. Measurement results show that the prototype design is capable of providing a 755 mV typical reference voltage with 34 ppm/°C from −15 °C to 140 °C. Moreover, the typical power consumption is only 4.6 nW at room temperature and the active area is only 0.0598 mm<sup>2</sup>.",
"title": ""
},
{
"docid": "25a99f97e034cd3dbdb76819e50e6198",
"text": "Nearest neighbor classiication assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with nite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classiication method to try to minimize bias. We use a Chi-squared distance analysis to compute a exible metric for producing neighborhoods that are highly adaptive to query locations. Neighborhoods are elongated along less relevant feature dimensions and constricted along most innuential ones. As a result, the class conditional probabilities tend to be smoother in the mod-iied neighborhoods, whereby better classiication performance can be achieved. The eecacy of our method is validated and compared against other techniques using a variety of simulated and real world data.",
"title": ""
},
{
"docid": "46bca704fff4c97c4dc6e6a8c51bb7b3",
"text": "Computer science concept inventories: past and future C. Taylor, D. Zingaro, L. Porter, K.C. Webb, C.B. Lee & M. Clancy a Department of Computer Science, Oberlin College, Oberlin, OH, USA. b Department of Math and Computer Sciences, University of Toronto, ON, Canada. c Computer Science and Eng. Dept., University of California, San Diego, CA, USA. d Department of Computer Science, Swarthmore College, Swarthmore, PA, USA. e Department of Computer Science, Stanford University, Stanford, CA, USA. f EECS Computer Science Division, University of California, Berkeley, CA, USA. Published online: 20 Oct 2014.",
"title": ""
},
{
"docid": "894eac11da60a5d81c437b3953d16408",
"text": "ion Levels 3 Behavior (Function) Structure (Netlist) Physical (Layout) Logic Circuit Processor System",
"title": ""
},
{
"docid": "0dc670653f3f61b9c694f996587091f0",
"text": "BACKGROUND\nThis paper presents data on alternations in the argument structure of common domain-specific verbs and their associated verbal nominalizations in the PennBioIE corpus. Alternation is the term in theoretical linguistics for variations in the surface syntactic form of verbs, e.g. the different forms of stimulate in FSH stimulates follicular development and follicular development is stimulated by FSH. The data is used to assess the implications of alternations for biomedical text mining systems and to test the fit of the sublanguage model to biomedical texts.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe examined 1,872 tokens of the ten most common domain-specific verbs or their zero-related nouns in the PennBioIE corpus and labelled them for the presence or absence of three alternations. We then annotated the arguments of 746 tokens of the nominalizations related to these verbs and counted alternations related to the presence or absence of arguments and to the syntactic position of non-absent arguments. We found that alternations are quite common both for verbs and for nominalizations. We also found a previously undescribed alternation involving an adjectival present participle.\n\n\nCONCLUSIONS/SIGNIFICANCE\nWe found that even in this semantically restricted domain, alternations are quite common, and alternations involving nominalizations are exceptionally diverse. Nonetheless, the sublanguage model applies to biomedical language. We also report on a previously undescribed alternation involving an adjectival present participle.",
"title": ""
},
{
"docid": "090f5709500bb06f919a5b95ac6549f3",
"text": "Esperanto is a constructed natural language, which was intended to be an easy-to-learn lingua franca. Zipf's law models the statistical proportions of various phenomena in human ecology, including natural languages. Given Esperanto’s artificial origins, one wonders how “natural” it appears, relative to other natural languages, in the context of Zipf’s law. To explore this question, we collected a total of 283 books from six languages: English, French, German, Italian, Spanish, and Esperanto. We applied Zipf-based metrics on our corpus to extract distributions for word, word distance, word bigram, word trigram, and word length for each book. Statistical analyses show that Esperanto’s statistical proportions are similar to those of other languages. We then trained artificial neural networks (ANNs) to classify books according to language. The ANNs achieved high accuracy rates (86.3% to 98.6%). Subsequent analysis identified German as having the most unique proportions, followed by Esperanto, Italian, Spanish, English, and French. Analysis of misclassified patterns shows that Esperanto’s statistical proportions resemble mostly those of German and Spanish, and least those of French and Italian.",
"title": ""
},
{
"docid": "e2ed500ce298ea175554af97bd0f2f98",
"text": "The Climate CoLab is a system to help thousands of people around the world collectively develop plans for what humans should do about global climate change. This paper shows how the system combines three design elements (model-based planning, on-line debates, and electronic voting) in a synergistic way. The paper also reports early usage experience showing that: (a) the system is attracting a continuing stream of new and returning visitors from all over the world, and (b) the nascent community can use the platform to generate interesting and high quality plans to address climate change. These initial results indicate significant progress towards an important goal in developing a collective intelligence system—the formation of a large and diverse community collectively engaged in solving a single problem.",
"title": ""
},
{
"docid": "f5182ad077b1fdaa450d16544d63f01b",
"text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.",
"title": ""
},
{
"docid": "5e04372f08336da5b8ab4d41d69d3533",
"text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.",
"title": ""
},
{
"docid": "92099d409e506a776853d4ae80c4285e",
"text": "Arti
cial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clari
es the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogiplaying Bonanza is an estimated value function via Rusts (1987) nested
xed-point method. AlphaGos supervised-learning policy network is a deep neural network implementation of Hotz and Millers (1993) conditional choice probability estimation; its reinforcement-learning value networkis equivalent to Hotz, Miller, Sanders, and Smiths (1994) conditional choice simulation method. Relaxing these AIs implicit econometric assumptions would improve their structural interpretability. Keywords: Arti
cial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator. JEL classi
cations: A12, C45, C57, C63, C73. First version: October 30, 2017. This paper bene
ted from seminar comments at Riken AIP, Georgetown, Tokyo, Osaka, Harvard, and The Third Cambridge Area Economics and Computation Day conference at Microsoft Research New England, as well as conversations with Susan Athey, Xiaohong Chen, Jerry Hausman, Greg Lewis, Robert Miller, Yusuke Narita, Aviv Nevo, Anton Popov, John Rust, Takuo Sugaya, Elie Tamer, and Yosuke Yasuda. yYale Department of Economics and MIT Department of Economics. E-mail: mitsuru.igami@gmail.com.",
"title": ""
},
{
"docid": "1d4309cfff1aff77aa4882e355a807b9",
"text": "VLIW architectures are popular in embedded systems because they offer high-performance processing at low cost and energy. The major problem with traditional VLIW designs is that they do not scale efficiently due to bottlenecks that result from centralized resources and global communication. Multicluster designs have been proposed to solve the scaling problem of VLIW datapaths, while much less work has been done on the control path. In this paper, we propose a distributed control path architecture for VLIW processors (DVLIW) to overcome the scalability problem of VLIW control paths. The architecture simplifies the dispersal of complex VLIW instructions and supports efficient distribution of instructions through a limited bandwidth interconnect, while supporting compressed instruction encodings. DVLIW employs a multicluster design where each cluster contains a local instruction memory that provides all intra-cluster control. All clusters have their own program counter and instruction sequencing capabilities, thus instruction execution is completely decentralized. The architecture executes multiple instruction streams at the same time, but these streams collectively function as a single logical instruction stream. Simulation results show that DVLIW processors reduce the number of cross-chip control signals by approximately two orders of magnitude while incurring a small performance overhead to explicitly manage the instruction streams.",
"title": ""
},
{
"docid": "08df6cd44a26be6c4cc96082631a0e6e",
"text": "In the natural habitat of our ancestors, physical activity was not a preventive intervention but a matter of survival. In this hostile environment with scarce food and ubiquitous dangers, human genes were selected to optimize aerobic metabolic pathways and conserve energy for potential future famines.1 Cardiac and vascular functions were continuously challenged by intermittent bouts of high-intensity physical activity and adapted to meet the metabolic demands of the working skeletal muscle under these conditions. When speaking about molecular cardiovascular effects of exercise, we should keep in mind that most of the changes from baseline are probably a return to normal values. The statistical average of physical activity in Western societies is so much below the levels normal for our genetic background that sedentary lifestyle in combination with excess food intake has surpassed smoking as the No. 1 preventable cause of death in the United States.2 Physical activity has been shown to have beneficial effects on glucose metabolism, skeletal muscle function, ventilator muscle strength, bone stability, locomotor coordination, psychological well-being, and other organ functions. However, in the context of this review, we will focus entirely on important molecular effects on the cardiovascular system. The aim of this review is to provide a bird’s-eye view on what is known and unknown about the physiological and biochemical mechanisms involved in mediating exercise-induced cardiovascular effects. The resulting map is surprisingly detailed in some areas (ie, endothelial function), whereas other areas, such as direct cardiac training effects in heart failure, are still incompletely understood. For practical purposes, we have decided to use primarily an anatomic approach to present key data on exercise effects on cardiac and vascular function. For the cardiac effects, the left ventricle and the cardiac valves will be described separately; for the vascular effects, we will follow the arterial vascular tree, addressing changes in the aorta, the large conduit arteries, the resistance vessels, and the microcirculation before turning our attention toward the venous and the pulmonary circulation (Figure 1). Cardiac Effects of Exercise Left Ventricular Myocardium and Ventricular Arrhythmias The maintenance of left ventricular (LV) mass and function depends on regular exercise. Prolonged periods of physical inactivity, as studied in bed rest trials, lead to significant reductions in LV mass and impaired cardiac compliance, resulting in reduced upright stroke volume and orthostatic intolerance.3 In contrast, a group of bed rest subjects randomized to regular supine lower-body negative pressure treadmill exercise showed an increase in LV mass and a preserved LV stoke volume.4 In previously sedentary healthy subjects, a 12-week moderate exercise program induced a mild cardiac hypertrophic response as measured by cardiac magnetic resonance imaging.5 These findings highlight the plasticity of LV mass and function in relation to the current level of physical activity.",
"title": ""
},
{
"docid": "a4319af83eaecdf3ffd84fdeea5ef62f",
"text": "In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent’s ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.",
"title": ""
}
] |
scidocsrr
|
a701575305929c8dbf6ac20c8efe6605
|
Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution
|
[
{
"docid": "adb9c43bb23ca4737aebbb9ee4b6c14e",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and errorprone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.",
"title": ""
},
{
"docid": "245de72c0f333f4814990926e08c13e9",
"text": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",
"title": ""
},
{
"docid": "10318d39b3ad18779accbf29b2f00fcd",
"text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.",
"title": ""
},
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
}
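Cutout, as the abstract above explains, simply zeroes out a random square region of each training image. A minimal NumPy sketch follows; the mask size and the edge-clipping behaviour are illustrative choices.

```python
import numpy as np

def cutout(image, mask_size=8, rng=np.random):
    """Zero out one randomly placed square patch of an (H, W, C) image array."""
    h, w = image.shape[:2]
    cy, cx = rng.randint(h), rng.randint(w)        # patch centre; patch may overhang the border
    y1, y2 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x1, x2 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y1:y2, x1:x2, :] = 0.0                     # mask the clipped square region
    return out

img = np.ones((32, 32, 3))
masked = cutout(img, mask_size=16)
print((masked == 0).any(axis=-1).sum(), "pixels masked")
```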
] |
[
{
"docid": "052121ed86fe268f0a58d6a6c53e342f",
"text": "Utilization of power electronics converters in pulsed power applications introduced a new series of reliable, long life, and cost-effective pulse generators. However, these converters suffer from the limited power ratings of semiconductor switches, which necessitate introduction of a new family of modular topologies. This paper proposes a modular power electronics converter based on voltage multiplier as a high voltage pulsed power generator. This modular circuit is able to generate flexible high output voltage from low input voltage sources. Circuit topology and operational principles of proposed topology are verified via experimental and simulation results as well as theoretical analysis.",
"title": ""
},
{
"docid": "788806728ab69413a435b2a51d484571",
"text": "As one of the most important research topic of nowadays, deep learning attracts researchers' attention with applications of convolutional (CNNs) and recurrent neural networks (RNNs). By pioneers of the deep learning community, generative adversarial training, which has been working for especially last two years, is defined as the most exciting topic of computer vision for the last 10 years. With the influence of these views, a new training approach is proposed to combine generative adversarial network (GAN) architecture with a cascading training. Using CVL database, text images can be generated in a short training time as a different application from the existing GAN examples.",
"title": ""
},
{
"docid": "bbdd0d5e82445f98c7482a3509517114",
"text": "In this paper, a converter-based dc microgrid is studied. By considering the impact of each component in dc microgrids on system stability, a multistage configuration is employed, which includes the source stage, interface converter stage between buses, and common load stage. In order to study the overall stability of the above dc microgrid with constant power loads (CPLs), a comprehensive small-signal model is derived by analyzing the interface converters in each stage. The instability issue induced by the CPLs is revealed by using the criteria of impedance matching. Meanwhile, virtual-impedance-based stabilizers are proposed in order to enhance the damping of dc microgrids with CPLs and guarantee the stable operation. Since droop control is commonly used to reach proper load power sharing in dc microgrids, its impact is taken into account when testing the proposed stabilizers. By using the proposed stabilizers, virtual impedances are employed in the output filters of the interface converters in the second stage of the multistage configuration. In particular, one of the virtual impedances is connected in series with the filter capacitor, and the other one is connected at the output path of the converter. It can be seen that by using the proposed stabilizers, the unstable poles induced by the CPLs are forced to move into the stable region. The proposed method is verified by the MATLAB/Simulink model of multistage dc microgrids with three distributed power generation units.",
"title": ""
},
{
"docid": "1d44a615112b4ede5258ae12c922715e",
"text": "The petrolingual ligament is the posteroinferior attachment of the lateral wall of the cavernous sinus, where the internal carotid artery enters the cavernous sinus. The petrous segment of the internal carotid artery finishes and the cavernous segment begins at the superior margin of this ligament. The ligament is surgically important due to its identification as a landmark for dissection of the internal carotid artery during the approaches to posterolateral intracavernous and extracavernous lesions. It can be well exposed after mobilization of the gasserian ganglion, or after the trigeminal root and ganglion have been split along the junction of V2 and V3 (the transtrigeminal approach). The petrolingual ligament was studied in five cadaveric head specimens from ten sides. The size of the ligament was measured, and its anatomical, clinical and surgical importance is discussed.",
"title": ""
},
{
"docid": "974869c228cbc69f0197fbb73a2c5c37",
"text": "Gut microbial communities represent one source of human genetic and metabolic diversity. To examine how gut microbiomes differ among human populations, here we characterize bacterial species in fecal samples from 531 individuals, plus the gene content of 110 of them. The cohort encompassed healthy children and adults from the Amazonas of Venezuela, rural Malawi and US metropolitan areas and included mono- and dizygotic twins. Shared features of the functional maturation of the gut microbiome were identified during the first three years of life in all three populations, including age-associated changes in the genes involved in vitamin biosynthesis and metabolism. Pronounced differences in bacterial assemblages and functional gene repertoires were noted between US residents and those in the other two countries. These distinctive features are evident in early infancy as well as adulthood. Our findings underscore the need to consider the microbiome when evaluating human development, nutritional needs, physiological variations and the impact of westernization.",
"title": ""
},
{
"docid": "d521b14ee04dbf69656240ef47c3319c",
"text": "This paper presents a computationally efficient approach for temporal action detection in untrimmed videos that outperforms state-of-the-art methods by a large margin. We exploit the temporal structure of actions by modeling an action as a sequence of sub-actions. A novel and fully automatic sub-action discovery algorithm is proposed, where the number of sub-actions for each action as well as their types are automatically determined from the training videos. We find that the discovered sub-actions are semantically meaningful. To localize an action, an objective function combining appearance, duration and temporal structure of sub-actions is optimized as a shortest path problem in a network flow formulation. A significant benefit of the proposed approach is that it enables real-time action localization (40 fps) in untrimmed videos. We demonstrate state-of-the-art results on THUMOS’14 and MEXaction2 datasets.",
"title": ""
},
{
"docid": "2afbf85020a40b7e1476d19419e7a2bd",
"text": "Coronary artery disease is the leading global cause of mortality. Long recognized to be heritable, recent advances have started to unravel the genetic architecture of the disease. Common variant association studies have linked approximately 60 genetic loci to coronary risk. Large-scale gene sequencing efforts and functional studies have facilitated a better understanding of causal risk factors, elucidated underlying biology and informed the development of new therapeutics. Moving forwards, genetic testing could enable precision medicine approaches by identifying subgroups of patients at increased risk of coronary artery disease or those with a specific driving pathophysiology in whom a therapeutic or preventive approach would be most useful.",
"title": ""
},
{
"docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2",
"text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).",
"title": ""
},
{
"docid": "f60c0c53b83fb1f1bdd68b0f3d1051c9",
"text": "Television (TV), the predominant advertising medium, is being transformed by the micro-targeting capabilities of set-top boxes (STBs). By procuring impressions at the STB level (often denoted programmatic television), advertisers can now lower per-exposure costs and/or reach viewers most responsive to advertising creatives. Accordingly, this paper uses a proprietary, household-level, single-source data set to develop an instantaneous show and advertisement viewing model to forecast consumers’ exposure to advertising and the downstream consequences for impressions and sales. Viewing data suggest person-specific factors dwarf brandor show-specific factors in explaining advertising avoidance, thereby suggesting that device-level advertising targeting can be more effective than existing show-level targeting. Consistent with this observation, the model indicates that microtargeting lowers advertising costs and raises incremental profits considerably relative to show-level targeting. Further, these advantages are amplified when advertisers are allowed to buy real-time as opposed to up-front.",
"title": ""
},
{
"docid": "d623c2223c3971b3a204b3369a16bce7",
"text": "Simultaneous localization, mapping and moving object tracking (SLAMMOT) involves both simultaneous localization and mapping (SLAM) in dynamic environments and detecting and tracking these dynamic objects. In this paper, we establish a mathematical framework to integrate SLAM and moving object tracking. We describe two solutions: SLAM with generalized objects, and SLAM with detection and tracking of moving objects (DATMO). SLAM with generalized objects calculates a joint posterior over all generalized objects and the robot. Such an approach is similar to existing SLAM algorithms, but with additional structure to allow for motion modeling of generalized objects. Unfortunately, it is computationally demanding and generally infeasible. SLAM with DATMO decomposes the estimation problem into two separate estimators. By maintaining separate posteriors for stationary objects and moving objects, the resulting estimation problems are much lower dimensional then SLAM with generalized objects. Both SLAM and moving object tracking from a moving vehicle in crowded urban areas are daunting tasks. Based on the SLAM with DATMO framework, we propose practical algorithms which deal with issues of perception modeling, data association, and moving object detection. The implementation of SLAM with DATMO was demonstrated using data collected from the CMU Navlab11 vehicle at high speeds in crowded urban environments. Ample experimental results shows the feasibility of the proposed theory and algorithms.",
"title": ""
},
{
"docid": "37de72b0e9064d09fb6901b40d695c0a",
"text": "BACKGROUND AND OBJECTIVES\nVery little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM) especially its effect on oxidative stress and inflammatory indices. The aim of present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.\n\n\nMETHODS AND STUDY DESIGN\n64 pregnant women with GDM were enrolled in a double-blind placebo controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic containing four bacterial strains of Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus Thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27 or placebo capsule for 8 consecutive weeks. Blood samples were taken pre- and post-treatment and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using mixed effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).\n\n\nRESULTS\nSerum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after intervention; however, neither within group nor between group differences interleukin-6 serum levels was statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.\n\n\nCONCLUSIONS\nThe probiotic supplement containing L.acidophilus LA- 5, Bifidobacterium BB- 12, S.thermophilus STY-31 and L.delbrueckii bulgaricus LBY-2 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
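The Vernam one-time pad sketched in the passage above reduces to a bitwise XOR of the message with a single-use key of equal length; the same XOR both "adds" and "subtracts" the key. A minimal Python illustration (the message contents and key handling here are purely illustrative):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Bitwise-XOR a message with a key of equal length (Vernam one-time pad)."""
    if len(key) != len(data):
        raise ValueError("a one-time pad key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"RENDEZVOUS AT DAWN"
key = secrets.token_bytes(len(message))   # single-use random key shared by Alice and Bob
ciphertext = xor_bytes(message, key)      # Alice "adds" the key
recovered = xor_bytes(ciphertext, key)    # Bob "subtracts" the same key
assert recovered == message
print(ciphertext.hex())                   # looks as random as the key itself
```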
{
"docid": "95333e4206a3b4c1a576f452c591421f",
"text": "Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.",
"title": ""
},
{
"docid": "64ef634078467594df83fe4cec779c27",
"text": "In Natural Language Processing the sequence-to-sequence, encoder-decoder model is very successful in generating sentences, as are the tasks of dialogue, translation and question answering. On top of this model an attention mechanism is often used. The attention mechanism has the ability to look back at all encoder outputs for every decoding step. The performance increase of attention shows that the final encoded state of an input sequence alone is too poor to successfully generate a target. In this paper more elaborate forms of attention, namely external memory, are investigated on varying properties within the range of dialogue. In dialogue, the target sequence is much more complex to predict than in other tasks, since the sequence can be of arbitrary length and can contain any information related to any of the previous utterances. External memory is hypothesized to improve performance exactly because of these properties of dialogue. Varying memory models are tested on a range of context sizes. Some memory modules show more stable results with an increasing context size.",
"title": ""
},
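The abstract above builds on attention's ability to look back at all encoder outputs at every decoding step rather than relying on the final encoded state alone. The following is a minimal NumPy sketch of one dot-product attention read over a memory of encoder states; shapes and names are illustrative, and it omits the learned projections a real model would use.

```python
import numpy as np

def dot_product_attention(query, encoder_outputs):
    """One decoding step of dot-product attention.

    query           : (d,)   current decoder hidden state
    encoder_outputs : (T, d) all encoder hidden states (the "memory")
    returns the context vector (d,) and the attention weights (T,)
    """
    scores = encoder_outputs @ query                 # (T,) similarity to each source position
    scores = scores - scores.max()                   # numerical stability before softmax
    weights = np.exp(scores) / np.exp(scores).sum()  # distribution over source positions
    context = weights @ encoder_outputs              # weighted sum of the memory
    return context, weights

T, d = 5, 4
memory = np.random.randn(T, d)
query = np.random.randn(d)
context, w = dot_product_attention(query, memory)
print(w.round(3), w.sum())   # the weights form a distribution over the 5 positions
```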
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "537f9537d94f045317f52fd5e1469c01",
"text": "This paper summarizes the work of building the autonomous system including detection system and path tracking controller for a formula student autonomous racecar. A LIDAR-vision cooperating method of detecting traffic cone which is used as track mark is proposed. Detection algorithm of the racecar also implements a precise and high rate localization method which combines the GPS-INS data and LIDAR odometry. Besides, a track map including the location and color information of the cones is built simultaneously. Finally, the system and vehicle performance on a closed loop track is tested. This paper also briefly introduces the Formula Student Autonomous Competition (FSAC) in 2017.",
"title": ""
},
{
"docid": "2dd2732babd2f18841d259899fec9323",
"text": "Individuals with mental illness receive harsh stigmatization, resulting in decreased life opportunities and a loss of independent functioning over and above the impairments related to mental disorders themselves. We begin our review with a multidisciplinary discussion of mechanisms underlying the strong propensity to devalue individuals displaying both deviant behavior and the label of mental illness. Featured is the high potential for internalization of negative perceptions on the part of those with mental disorders-i.e., self-stigmatization. We next focus on several issues of conceptual and practical relevance: (a) stigma against less severe forms of mental disorder; (b) the role of perceptions of dangerousness related to mental illness; (c) reconciliation of behavioral research with investigations of explicit and implicit attitudes; (d) evolutionary models and their testability; (e) attributional accounts of the causes of mental illness, especially to personal control versus biogenetic factors; and (f) developmental trends regarding stigma processes. We conclude with a brief review of multilevel efforts to overcome mental illness stigma, spanning policy and legislation, alterations in media depictions, changed attitudes and practices among mental health professionals, contact and empathy enhancement, and family and individual treatment.",
"title": ""
},
{
"docid": "92c72aa180d3dccd5fcc5504832780e9",
"text": "The site of S1-S2 root activation following percutaneous high-voltage electrical (ES) and magnetic stimulation were located by analyzing the variations of the time interval from M to H soleus responses elicited by moving the stimulus point from lumbar to low thoracic levels. ES was effective in activating S1-S2 roots at their origin. However supramaximal motor root stimulation required a dorsoventral montage, the anode being a large, circular surface electrode placed ventrally, midline between the apex of the xiphoid process and the umbilicus. Responses to magnetic stimuli always resulted from the activation of a fraction of the fiber pool, sometimes limited to the low-thresholds afferent component, near its exit from the intervertebral foramina, or even more distally. Normal values for conduction velocity in motor and 1a afferent fibers in the proximal nerve tract are provided.",
"title": ""
},
{
"docid": "4ead765c1fc9b62f2477b4b8e1f80ece",
"text": "Educational researchers in every discipline need to be cognisant of alternative research traditions to make decisions about which method to use when embarking on a research study. There are two major approaches to research that can be used in the study of the social and the individual world. These are quantitative and qualitative research. Although there are books on research methods that discuss the differences between alternative approaches, it is rare to find an article that examines the design issues at the intersection of the quantitative and qualitative divide based on eminent research literature. The purpose of this article is to explain the major differences between the two research paradigms by comparing them in terms of their epistemological, theoretical, and methodological underpinnings. Since quantitative research has well-established strategies and methods but qualitative research is still growing and becoming more differentiated in methodological approaches, greater consideration will be given to the latter.",
"title": ""
},
{
"docid": "b2a0755176f20cd8ee2ca19c091d022d",
"text": "Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot’s own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architecture and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.",
"title": ""
}
] |
scidocsrr
|
3cf3929c61c27bf0d3a1001ec20a98e7
|
Zero-Shot Recognition via Structured Prediction
|
[
{
"docid": "b3012ab055e3f4352b3473700c30c085",
"text": "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45% improvement accordingly in mean average precision (mAP).",
"title": ""
}
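The passage above casts zero-shot recognition as predicting whether a target-domain instance and source-domain side information (e.g., class attributes) match. The paper's own model is a latent probabilistic, jointly learned dictionary formulation; the NumPy sketch below only illustrates the general matching view it generalizes, using a fixed linear projection and cosine scores, so the projection matrix and all names are assumptions rather than the authors' method.

```python
import numpy as np

def zero_shot_predict(img_feats, class_attributes, W):
    """Score every (image, unseen-class) pair and pick the best match per image.

    img_feats        : (n, d_img)  target-domain instances
    class_attributes : (c, d_attr) source-domain side information for unseen classes
    W                : (d_img, d_attr) projection standing in for a learned
                       cross-domain compatibility model
    """
    proj = img_feats @ W                                            # map images into attribute space
    proj /= np.linalg.norm(proj, axis=1, keepdims=True) + 1e-9
    attrs = class_attributes / (np.linalg.norm(class_attributes, axis=1, keepdims=True) + 1e-9)
    match_scores = proj @ attrs.T                                   # (n, c) "same class?" scores
    return match_scores.argmax(axis=1), match_scores

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))              # hypothetical learned projection
images = rng.normal(size=(3, 10))
unseen_attrs = rng.normal(size=(5, 4))
labels, scores = zero_shot_predict(images, unseen_attrs, W)
print(labels)                             # predicted unseen-class index for each image
```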
] |
[
{
"docid": "00acc8c527894f887229a55d3db1e9f7",
"text": "The multi-level perspective (MLP) has emerged as a fruitful middle-range framework for analysing socio-technical transitions to sustainability. The MLP also received constructive criticisms. This paper summarises seven criticisms, formulates responses to them, and translates these into suggestions for future research. The criticisms relate to: (1) lack of agency, (2) operationalization of regimes, (3) bias towards bottom-up change models, (4) epistemology and explanatory style, (5) methodology, (6) socio-technical landscape as residual category, and (7) flat ontologies versus hierarchical levels. © 2011 Elsevier B.V. All rights reserved. 1. Transitions to sustainability Contemporary environmental problems, such as climate change, loss of biodiversity, and resource depletion (clean water, oil, forests, fish stocks, etc.) present formidable societal challenges. Addressing these problems requires factor 10 or more improvements in environmental performance which can only be realized by deep-structural changes in transport, energy, agri-food and other systems (Elzen et al., 2004; Van den Bergh and Bruinsma, 2008; Grin et al., 2010). These systemic changes are often called ‘socio-technical transitions’, because they involve alterations in the overall configuration of transport, energy, and agri-food systems, which entail technology, policy, markets, consumer practices, infrastructure, cultural meaning and scientific knowledge (Elzen et al., 2004; Geels, 2004). These elements are reproduced, maintained and transformed by actors such as firms and industries, policy makers and politicians, consumers, civil society, engineers and researchers. Transitions are therefore complex and long-term processes comprising multiple actors. ∗ Tel.: +44 01273678171. E-mail address: f.w.geels@sussex.ac.uk 2210-4224/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.eist.2011.02.002 F.W. Geels / Environmental Innovation and Societal Transitions 1 (2011) 24–40 25 Transitions towards sustainability have some special characteristics that make them different, in certain respects, from many (though not all) historical transitions. First, sustainability transitions are goal-oriented or ‘purposive’ (Smith et al., 2005) in the sense of addressing persistent environmental problems, whereas many historical transitions were ‘emergent’ (e.g. entrepreneurs exploring commercial opportunities related to new technologies). Private actors have limited incentives to address sustainability transitions, because the goal is related to a collective good (‘sustainability’), which implies free rider problems and prisoner’s dilemmas. Public authorities and civil society will be crucial to address public goods and internalize negative externalities, to change economic frame conditions, and to support ‘green’ niches (Elzen et al., 2011). Because sustainability is an ambiguous and contested concept, there will be disagreement and debate about the directionality of sustainability transitions (Stirling, 2009), the (dis)advantages of particular solutions and the most appropriate policy instruments or packages. A second characteristic that makes sustainability transitions unique is that most ‘sustainable’ solutions do not offer obvious user benefits (because sustainability is a collective good), and often score lower on price/performance dimensions than established technologies. 
It is therefore unlikely that environmental innovations will be able to replace existing systems without changes in economic frame conditions (e.g., taxes, subsidies, regulatory frameworks). These changes will require changes in policies, which entails politics and power struggles, because vested interests will try to resist such changes. A third characteristic relates to the empirical domains where sustainability transitions are most needed, such as transport, energy and agri-food. These domains are characterized by large firms (e.g., car manufacturers, electric utilities, oil companies, food processing companies, supermarkets) that possess ‘complementary assets’ such as specialized manufacturing capability, experience with largescale test trials, access to distribution channels, service networks, and complementary technologies (Rothaermel, 2001). These complementary assets give incumbent firms strong positions vis-a-vis pioneers that often first develop environmental innovations. Although large incumbent firms will probably not be the initial leaders of sustainability transitions, their involvement might accelerate the breakthrough of environmental innovations if they support these innovations with their complementary assets and resources. This would, however, require a strategic reorientation of incumbents who presently still defend existing systems and regimes. These considerations imply that sustainability transitions are necessarily about interactions between technology, policy/power/politics, economics/business/markets, and culture/discourse/public opinion. Researchers therefore need theoretical approaches that address, firstly, the multi-dimensional nature of sustainability transitions, and, secondly, the dynamics of structural change. With regard to structural change the problem is that many existing (unsustainable) systems are stabilized through various lock-in mechanisms, such as scale economies, sunk investments in machines, infrastructures and competencies. Also institutional commitments, shared beliefs and discourses, power relations, and political lobbying by incumbents stabilize existing systems (Unruh, 2000). Additionally, consumer lifestyles and preferences may have become adjusted to existing technical systems. These lock-in mechanisms create path dependence and make it difficult to dislodge existing systems. So, the core analytical puzzle is to understand how environmental innovations emerge and how these can replace, transform or reconfigure existing systems. This paper is about one particular approach, namely the multi-level perspective (MLP). On both of the above aspects (multi-dimensionality and structural change), the MLP goes beyond studies of single technologies (such as wind turbines, biofuels, fuel cells, and electric vehicles), which dominate the literature on environmental innovation. The technological innovation system approach (Hekkert et al., 2007) is multi-dimensional (although cultural and demand side aspects are under-developed), but does not address structural change (how emerging innovations struggle against existing systems). The disruptive innovation (Christensen, 1997) and technological discontinuity (Anderson and Tushman, 1990) literatures do look at interactions between new entrants and incumbents, but tend to focus only on technology and market dimensions. Both approaches also have a technology-push character, with ‘eras of incremental change’ being punctuated by new technologies into ‘eras of ferment’. 
While the MLP allows for technology-push substitution as one kind of transition pattern, it also distinguishes other transition patterns in which regime destabilisation precedes technical substitution (see Section 26 F.W. Geels / Environmental Innovation and Societal Transitions 1 (2011) 24–40 “Bias towards bottom-up change models” below). Long-wave theory on techno-economic paradigm (TEP) shifts (Freeman and Perez, 1988) is multi-dimensional and addresses structural change. TEPs refer to configurations of pervasive technologies, methods of productions, economic structures, institutions and beliefs that are stable for long periods. Early explanations of TEP shifts (Freeman and Perez, 1988) had deterministic overtones with techno-economic forces doing the initial acting and the socioinstitutional framework doing the subsequent reacting. Freeman and Louçă (Freeman and Louçă, 2001) subsequently distinguished five interacting sub-systems (science, technology, economy, politics and culture), and emphasized the alignments between sub-system dynamics: “It is essential to study both the relatively independent development of each stream of history and their interdependencies, their loss of integration, and their reintegration” (Freeman and Louçă, 2001:127). While this conceptual refinement brings TEP closer to the MLP, there remains a difference in scope, with TEP focusing on entire economies and MLP focusing on concrete energy, transport, agri-good systems etc. TEP-work therefore gives more attention to aggregate processes, while MLP-work focuses in more detail on the various groups, their strategies, resources, beliefs and interactions. While there are various approaches to transitions, with different strengths and weaknesses, the remainder of this paper focuses on the MLP. The emergence of the MLP as a fruitful approach has been accompanied by various criticisms. This paper aims to respond to these criticisms, qualify them, and, where possible, translate them into productive suggestions for future research. The paper is organized as follows. Section “The multi-level perspective on socio-technical transitions” briefly describes the MLP. Section “Criticisms, responses, and suggestions for future research” addresses seven types of criticisms. Section “Concluding comments” concludes. 2. The multi-level perspective on socio-technical transitions The multi-level perspective (MLP) is a middle-range theory that conceptualizes overall dynamic patterns in socio-technical transitions.1 The analytical framework combines concepts from evolutionary economics (trajectories, regimes, niches, speciation, path dependence, routines), science and technology studies (sense making, social networks, innovation as a social process shaped by broader societal contexts), structuration theory and neo-institutional theory (rules and institutions as ‘deep structures’ on which knowledgeable actors draw in their actions, duality of structure, i.e. structures are both context and outcome of actions, ‘rules of the game’ that structure actions). These theoretical ",
"title": ""
},
{
"docid": "984bf4f0500e737159b847eab2fa5021",
"text": "We present efmaral, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from efmaral than from fast_align, and that translation quality is on par with what is obtained using giza++, a tool requiring orders of magnitude more processing time. More generally we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should actually be the method of choice for the SMT practitioner and others interested in word alignment.",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "3d489be641e3bb259c02eaf5d23b79ff",
"text": "Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine learning. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach “learning to teach”. In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding).",
"title": ""
},
{
"docid": "34993e22f91f3d5b31fe0423668a7eb1",
"text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method",
"title": ""
},
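The abstract above combines k-means with particle swarm optimization to improve global search over centroid positions. A hedged NumPy sketch of one way to do this — each particle encodes a full set of centroids and its fitness is the within-cluster squared error — is given below; the inertia and acceleration constants are conventional illustrative values, not the paper's settings.

```python
import numpy as np

def sse(X, centroids):
    """Within-cluster sum of squared errors for one candidate centroid set."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
    return d.min(axis=1).sum()

def pso_kmeans(X, k=3, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    # each particle is a full set of k centroids, initialised at random data points
    pos = X[rng.integers(n, size=(n_particles, k))].copy()
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([sse(X, p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([sse(X, p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest

# toy data: two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(pso_kmeans(X, k=2).round(2))
```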
{
"docid": "05db9a684a537fdf1234e92047618e18",
"text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.",
"title": ""
},
{
"docid": "95afd1d83b5641a7dff782588348d2ec",
"text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.",
"title": ""
},
{
"docid": "9e0186c53e0a55744f60074145d135e3",
"text": "Two new low-power, and high-performance 1bit Full Adder cells are proposed in this paper. These cells are based on low-power XOR/XNOR circuit and Majority-not gate. Majority-not gate, which produces Cout (Output Carry), is implemented with an efficient method, using input capacitors and a static CMOS inverter. This kind of implementation benefits from low power consumption, a high degree of regularity and simplicity. Eight state-of-the-art 1-bit Full Adders and two proposed Full Adders are simulated with HSPICE using 0.18μm CMOS technology at several supply voltages ranging from 2.4v down to 0.8v. Although low power consumption is targeted in implementation of our designs, simulation results demonstrate great improvement in terms of power consumption and also PDP.",
"title": ""
},
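The cell described above produces the sum through an XOR/XNOR chain and the output carry through a majority-not node (input capacitors followed by a static CMOS inverter that restores the true carry). A small behavioural check of that logic over the full truth table, with the capacitive averaging modelled as a simple threshold (an illustrative abstraction of the circuit, not a transistor-level model):

```python
from itertools import product

def full_adder_logic(a, b, cin):
    """Behavioural model of the 1-bit cell: Sum from the XOR chain, Cout via majority-not
    plus an inverter that restores the true carry."""
    s = a ^ b ^ cin
    majority_not = int(a + b + cin < 2)   # capacitive average stays below the inverter threshold
    cout = 1 - majority_not               # inverter output = majority of the three inputs
    return s, cout

for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder_logic(a, b, cin)
    assert a + b + cin == s + 2 * cout    # arithmetic definition of a full adder
    print(a, b, cin, "->", s, cout)
```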
{
"docid": "5e9d63bfc3b4a66e0ead79a2d883adfe",
"text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.",
"title": ""
},
{
"docid": "d21c83cf000314f7094c1e58dd081b91",
"text": "The Aurora system [1] is an experimental data stream management system with a fully functional prototype. It includes both a graphical development environment, and a runtime system. We propose to demonstrate the Aurora system with its development environment and runtime system, with several example monitoring applications developed in consultation with defense, financial, and natural science communities. We will also demonstrate the effect of various system alternatives on various workloads. For example, we will show how different scheduling algorithms affect tuple latency and internal queue lengths. We will use some of our visualization tools to accomplish this. Data Stream Management Aurora is a data stream management system for monitoring applications. Streams are continuous data feeds from such sources as sensors, satellites and stock feeds. Monitoring applications track the data from numerous streams, filtering them for signs of abnormal activity and processing them for purposes of aggregation, reduction and correlation. The management requirements for monitoring applications differ profoundly from those satisfied by a traditional DBMS: o A traditional DBMS assumes a passive model where most data processing results from humans issuing transactions and queries. Data stream management requires a more active approach, monitoring data feeds from unpredictable external sources (e.g., sensors) and alerting humans when abnormal activity is detected. o A traditional DBMS manages data that is currently in its tables. Data stream management often requires processing data that is bounded by some finite window of values, and not over an unbounded past. o A traditional DBMS provides exact answers to exact queries, and is blind to real-time deadlines. Data stream management often must respond to real-time deadlines (e.g., military applications monitoring positions of enemy platforms) and therefore must often provide reasonable approximations to queries. o A traditional query processor optimizes all queries in the same way (typically focusing on response time). A stream data manager benefits from application specific optimization criteria (QoS). o A traditional DBMS assumes pull-based queries to be the norm. Push-based data processing is the norm for a data stream management system. A Brief Summary of Aurora Aurora has been designed to deal with very large numbers of data streams. Users build queries out of a small set of operators (a.k.a. boxes). The current implementation provides a user interface for tapping into pre-existing inputs and network flows and for wiring boxes together to produces answers at the outputs. While it is certainly possible to accept input as declarative queries, we feel that for a very large number of such queries, the process of common sub-expression elimination is too difficult. An example of an Aurora network is given in Screen Shot 1. A simple stream is a potentially infinite sequence of tuples that all have the same stream ID. An arc carries multiple simple streams. This is important so that simple streams can be added and deleted from the system without having to modify the basic network. A query, then, is a sub-network that ends at a single output and includes an arbitrary number of inputs. Boxes can connect to multiple downstream boxes. All such path splits carry identical tuples. Multiple streams can be merged since some box types accept more than one input (e.g., Join, Union). We do not allow any cycles in an operator network. 
Each output is supplied with a Quality of Service (QoS) specification. Currently, QoS is captured by three functions: (1) a latency graph, (2) a value-based graph, and (3) a loss-tolerance graph. The latency graph indicates how utility drops as an answer is delayed. The value-based graph shows which values of the output space are most important. The loss-tolerance graph is a simple way to describe how averse the application is to approximate answers. Tuples arrive at the input and are queued for processing. A scheduler selects a box with waiting tuples and executes that box on one or more of the input tuples. The output tuples of a box are queued at the input of the next box in sequence. In this way, tuples make their way from the inputs to the outputs. If the system is overloaded, QoS is adversely affected. In this case, we invoke a load shedder to strategically eliminate tuples. Aurora supports persistent storage in two different ways. First, when box queues consume more storage than available RAM, the system will spill tuples that are less likely to be needed soon to secondary storage. Second, ad hoc queries can be connected to (and disconnected from) any arc for which a connection point has been defined. A connection point stores a historical portion of a stream that has flowed on the arc. For example, one could define a connection point as the last hour’s worth of data that has been seen on a given arc. Any ad hoc query that connects to a connection point has access to the full stored history as well as any additional data that flows past while the query is connected.",
"title": ""
},
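The Aurora description above treats a query as a network of operator boxes through which tuples flow. The toy generator-based sketch below wires a filter box into a sliding-window box to mimic that dataflow style; the operator names and tuple format are illustrative and not Aurora's actual operator set or API.

```python
from collections import deque

def filter_box(stream, predicate):
    """Pass through only the tuples that satisfy the predicate."""
    for tup in stream:
        if predicate(tup):
            yield tup

def window_avg_box(stream, key, size=3):
    """Sliding-window average over the last `size` tuples (one output per input)."""
    window = deque(maxlen=size)
    for tup in stream:
        window.append(tup[key])
        yield {**tup, "avg_" + key: sum(window) / len(window)}

# input: a simple stream of sensor readings
readings = [{"sensor": "s1", "temp": t} for t in (20, 21, 35, 36, 22, 40)]

# wire boxes together: keep hot readings, then smooth them
network = window_avg_box(filter_box(iter(readings), lambda t: t["temp"] > 30), "temp")
for out in network:
    print(out)
```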
{
"docid": "0e482ebd5fa8f8f3fc67b01e9e6ee4bc",
"text": "Lung cancer is one of the most deadly diseases. It has a high death rate and its incidence rate has been increasing all over the world. Lung cancer appears as a solitary nodule in chest x-ray radiograph (CXR). Therefore, lung nodule detection in CXR could have a significant impact on early detection of lung cancer. Radiologists define a lung nodule in CXR as “solitary white nodule-like blob.” However, the solitary feature has not been employed for lung nodule detection before. In this paper, a solitary feature-based lung nodule detection method was proposed. We employed stationary wavelet transform and convergence index filter to extract the texture features and used AdaBoost to generate white nodule-likeness map. A solitary feature was defined to evaluate the isolation degree of candidates. Both the isolation degree and the white nodule likeness were used as final evaluation of lung nodule candidates. The proposed method shows better performance and robustness than those reported in previous research. More than 80% and 93% of lung nodules in the lung field in the Japanese Society of Radiological Technology (JSRT) database were detected when the false positives per image were two and five, respectively. The proposed approach has the potential of being used in clinical practice.",
"title": ""
},
{
"docid": "367268c67657a43d1b981347e8175153",
"text": "In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.",
"title": ""
},
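SARAH's defining feature, per the abstract above, is the recursive gradient estimate v_t = ∇f_i(w_t) − ∇f_i(w_{t−1}) + v_{t−1}, refreshed by a full gradient at each outer iteration and requiring no table of past gradients. A minimal NumPy sketch on a noise-free least-squares finite sum follows; the step size and loop lengths are illustrative choices.

```python
import numpy as np

def sarah(A, b, eta=0.02, outer=30, inner=100, seed=0):
    """SARAH on the finite sum f(w) = (1/n) * sum_i (a_i·w - b_i)^2 / 2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    grad = lambda w, i: (A[i] @ w - b[i]) * A[i]      # gradient of one summand
    for _ in range(outer):
        v = (A.T @ (A @ w - b)) / n                   # v_0: full gradient at the snapshot
        w_prev = w.copy()
        w = w - eta * v
        for _ in range(inner):
            i = rng.integers(n)
            v = grad(w, i) - grad(w_prev, i) + v      # recursive estimate, no stored gradients
            w_prev = w.copy()
            w = w - eta * v
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
b = A @ w_true                                        # noise-free targets, exact optimum w_true
w_hat = sarah(A, b)
print(np.linalg.norm(w_hat - w_true))                 # driven toward zero
```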
{
"docid": "767485f599caba578e17182d5407f8c1",
"text": "Marijuana is the most commonly used drug of abuse in the USA. It is commonly abused through inhalation and therefore has effects on the lung that are similar to tobacco smoke, including increased cough, sputum production, hyperinflation, and upper lobe emphysematous changes. However, at this time, it does not appear that marijuana smoke contributes to the development of chronic obstructive pulmonary disease. Marijuana can have multiple physiologic effects such as tachycardia, peripheral vasodilatation, behavioral and emotional changes, and possible prolonged cognitive impairment. The carcinogenic effects of marijuana are unclear at this time. Studies are mixed on the ability of marijuana smoke to increase the risk for head and neck squamous cell carcinoma, lung cancer, prostate cancer, and cervical cancer. Some studies show that marijuana is protective for development of malignancy. Marijuana smoke has been shown to have an inhibitory effect on the immune system. Components of cannabis are under investigation as treatment for autoimmune diseases and malignancy. As marijuana becomes legalized in many states for medical and recreational use, other forms of tetrahydrocannabinol (THC) have been developed, such as food products and beverages. As most research on marijuana at this time has been on whole marijuana smoke, rather than THC, it is difficult to determine if the currently available data is applicable to these newer products.",
"title": ""
},
{
"docid": "8589ec481e78d14fbeb3e6e4205eee50",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e8b0536f5d749b5f6f5651fe69debbe1",
"text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.",
"title": ""
},
{
"docid": "dbb2a53d4dfbf0840d96670a25f88113",
"text": "In real-world recognition/classification tasks, limited by various objective factors, it is usually difficult to collect training samples to exhaust all classes when training a recognizer or classifier. A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time, and unknown classes can be submitted to an algorithm during testing, requiring the classifiers not only to accurately classify the seen classes, but also to effectively deal with the unseen ones. This paper provides a comprehensive survey of existing open set recognition techniques covering various aspects ranging from related definitions, representations of models, datasets, experiment setup and evaluation metrics. Furthermore, we briefly analyze the relationships between OSR and its related tasks including zero-shot, one-shot (few-shot) recognition/learning techniques, classification with reject option, and so forth. Additionally, we also overview the open world recognition which can be seen as a natural extension of OSR. Importantly, we highlight the limitations of existing approaches and point out some promising subsequent research directions in this field.",
"title": ""
},
{
"docid": "382885a630e4d870fe6735977b519689",
"text": "TMEM16A forms calcium-activated chloride channels (CaCCs) that regulate physiological processes such as the secretions of airway epithelia and exocrine glands, the contraction of smooth muscles, and the excitability of neurons. Notwithstanding intense interest in the mechanism behind TMEM16A-CaCC calcium-dependent gating, comprehensive surveys to identify and characterize potential calcium sensors of this channel are still lacking. By aligning distantly related calcium-activated ion channels in the TMEM16 family and conducting systematic mutagenesis of all conserved acidic residues thought to be exposed to the cytoplasm, we identify four acidic amino acids as putative calcium-binding residues. Alterations of the charge, polarity, and size of amino acid side chains at these sites alter the ability of different divalent cations to activate the channel. Furthermore, TMEM16A mutant channels containing double cysteine substitutions at these residues are sensitive to the redox potential of the internal solution, providing evidence for their physical proximity and solvent accessibility.",
"title": ""
},
{
"docid": "06574b1a35aef36494726f91dfe8f909",
"text": "This paper presents the extension of a birth simulator for medical training with an augmented reality system. The system presents an add-on of the user interface for our previous work on a mixed reality delivery simulator system [1]. This simulation system comprised direct haptic and auditory feedback, and provided important physiological data including values of blood pressure, heart rates, pain and oxygen supply, necessary for training physicians. Major drawback of the system was the indirect viewing of both the virtual models and the final delivery process. The current paper extends the existing system by bringing in the in-situ visualization. This plays an important role in increasing the efficiency of the training, since the physician now concentrates on the vaginal delivery rather than the remote computer screen. In addition, forceps are modeled and an external optical tracking system is integrated in order to provide visual feedback while training with the simulator for complicated procedures such as forceps delivery.",
"title": ""
},
{
"docid": "004e9fbdfadae9c41738d3ad8e7392b1",
"text": "In this paper, we conduct a cross-layer analysis of both the jamming capability of the cognitiveradio-based jammers and the anti-jamming capability of the cognitive radio networks (CRN). We use a Markov chain to model the CRN operations in spectrum sensing, channel access and channel switching under jamming. With various jamming models, the jamming probabilities and the throughputs of the CRN are obtained in closed-form expressions. Furthermore, the models and expressions are simplified to determine the minimum and the maximum CRN throughput expressions under jamming, and to optimize important anti-jamming parameters. The results are helpful for the optimal anti-jamming CRN design. The model and the analysis results are verified by simulations.",
"title": ""
},
{
"docid": "e5f0bca200dc4ef5a806feb06b4cf2a4",
"text": "Supply chain finance is a new financing model that makes the industry chain as an organic whole chain to develop financing services. Its purpose is to combine with financial institutions, companies and third-party logistics companies to achieve win-win situation. The supply chain is designed to maximize the financial value. The supply chain finance business in our country is still in its early stages. Conducting the research on risk assessment and control of the supply chain finance business has an important significance for the promotion of the development of our country supply chain finance business. The paper investigates the dynamic multiple attribute decision making problems, in which the decision information, provided by decision makers at different periods, is expressed in intuitionistic fuzzy numbers. We first develop one new aggregation operators called dynamic intuitionistic fuzzy Hamacher weighted averaging (DIFHWA) operator. Moreover, a procedure based on the DIFHWA and IFHWA operators is developed to solve the dynamic multiple attribute decision making problems where all the decision information about attribute values takes the form of intuitionistic fuzzy numbers collected at different periods. Finally, an illustrative example for risk assessment of supply chain finance is given to verify the developed approach and to demonstrate its practicality and effectiveness.",
"title": ""
}
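The DIFHWA operator itself is defined in the paper; as a stand-in illustration only, the sketch below implements the standard algebraic intuitionistic fuzzy weighted averaging, which is commonly cited as the Hamacher family's special case with parameter 1. It should not be read as the paper's operator.

```python
# Illustrative (not the paper's DIFHWA): algebraic intuitionistic fuzzy
# weighted averaging.  An intuitionistic fuzzy number is a pair
# (membership mu, non-membership nu) with mu + nu <= 1.
import numpy as np

def ifwa(ifns, weights):
    """Aggregate intuitionistic fuzzy numbers with weights summing to 1."""
    mu = np.array([a[0] for a in ifns])
    nu = np.array([a[1] for a in ifns])
    w = np.asarray(weights)
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w)
    agg_nu = np.prod(nu ** w)
    return agg_mu, agg_nu

def score(ifn):
    """Common score function used to rank aggregated alternatives."""
    return ifn[0] - ifn[1]

# Toy example: two periods' assessments of one alternative, weighted 0.4 / 0.6.
print(score(ifwa([(0.6, 0.3), (0.7, 0.2)], [0.4, 0.6])))
```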
] |
scidocsrr
|
42d1b7fdc92d332b20596cee8e4bfc2b
|
Joint Semantic Segmentation and 3D Reconstruction from Monocular Video
|
[
{
"docid": "689e4936c818fd9b40ac8a4990cc693f",
"text": "We address the problem of image-based scene analysis from streaming video, as would be seen from a moving platform, in order to efficiently generate spatially and temporally consistent predictions of semantic categories over time. In contrast to previous techniques which typically address this problem in batch and/or through graphical models, we demonstrate that by learning visual similarities between pixels across frames, a simple filtering algorithfiltering algorithmm is able to achieve high performance predictions in an efficient and online/causal manner. Our technique is a meta-algorithm that can be efficiently wrapped around any scene analysis technique that produces a per-pixel semantic category distribution. We validate our approach over three different scene analysis techniques on three different datasets that contain different semantic object categories. Our experiments demonstrate that our approach is very efficient in practice and substantially improves the consistency of the predictions over time.",
"title": ""
},
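The filtering idea in the passage above can be made concrete with a small sketch. Here the per-pixel similarity weights are simply taken as an input array rather than learned from visual similarities across frames, which is an assumption of this illustration and not how the paper obtains them.

```python
# Simplified sketch of causal temporal filtering of per-pixel class
# distributions.  `prev` and `curr` have shape (H, W, C) and hold class
# probability distributions; `similarity` in [0, 1] says how strongly each
# pixel should trust the temporally propagated estimate.
import numpy as np

def filter_frame(prev, curr, similarity):
    w = similarity[..., None]                    # broadcast over the class axis
    fused = w * prev + (1.0 - w) * curr          # convex per-pixel blend
    fused /= fused.sum(axis=-1, keepdims=True)   # renormalize to distributions
    return fused

def filter_stream(frames, similarities):
    """Run the filter causally (online) over a stream of per-frame predictions."""
    state = frames[0]
    out = [state]
    for curr, sim in zip(frames[1:], similarities):
        state = filter_frame(state, curr, sim)
        out.append(state)
    return out
```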
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "14360f8801fcff22b7a0059b322ebf9a",
"text": "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other’s continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.",
"title": ""
}
] |
[
{
"docid": "28a859bed62033aed005ee5895109953",
"text": "Eating out has recently become part of our lifestyle. However, when eating out in restaurants, many people find it difficult to make meal choices consistent with their health goals. Bad eating choices and habits are in part responsible for the alarming increase in the prevalence of chronic diseases such as obesity, diabetes, and high blood pressure, which burden the health care system. Therefore, there is a need for an intervention that educates the public on how to make healthy choices while eating away from home. In this paper, we propose a goal-based slow-casual game approach that addresses this need. This approach acknowledges different groups of users with varying health goals and adopts slow technology to promote learning and reflection. We model two recognized determinants of well-being into dietary interventions and provide feedback accordingly. To demonstrate the suitability of our approach for long-term sustained learning, reflection, and attitude and/or behavior change, we develop and evaluate LunchTime—a goal-based slow-casual game that educates players on how to make healthier meal choices. The result from the evaluation shows that LunchTime facilitates learning and reflection and promotes positive dietary attitude change.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "e602cb626418ff3dbb38fd171bfd359e",
"text": "File carving is an important technique for digital forensics investigation and for simple data recovery. By using a database of headers and footers (essentially, strings of bytes at predictable offsets) for specific file types, file carvers can retrieve files from raw disk images, regardless of the type of filesystem on the disk image. Perhaps more importantly, file carving is possible even if the filesystem metadata has been destroyed. This paper presents some requirements for high performance file carving, derived during design and implementation of Scalpel, a new open source file carving application. Scalpel runs on machines with only modest resources and performs carving operations very rapidly, outperforming most, perhaps all, of the current generation of carving tools. The results of a number of experiments are presented to support this assertion.",
"title": ""
},
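As a toy illustration of the header/footer carving idea described above (not Scalpel's actual implementation), the sketch below scans a raw byte buffer for a JPEG header and footer and extracts the bytes in between; real carvers use a configurable signature database, bounded carve sizes and chunked I/O.

```python
# Toy header/footer carver: find JPEG-like segments in a raw image buffer.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(buffer, max_size=10 * 1024 * 1024):
    carved = []
    start = buffer.find(JPEG_HEADER)
    while start != -1:
        end = buffer.find(JPEG_FOOTER, start)
        if end != -1 and end - start <= max_size:
            carved.append(buffer[start:end + len(JPEG_FOOTER)])
        start = buffer.find(JPEG_HEADER, start + 1)
    return carved

# Usage sketch (the image path is hypothetical):
# with open("disk.img", "rb") as f:
#     files = carve_jpegs(f.read())
```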
{
"docid": "07409cd81cc5f0178724297245039878",
"text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.",
"title": ""
},
{
"docid": "057b397d3b72a30352697ce0940e490a",
"text": "Recent events of multiple earthquakes in Nepal, Italy and New Zealand resulting loss of life and resources bring our attention to the ever growing significance of disaster management, especially in the context of large scale nature disasters such as earthquake and Tsunami. In this paper, we focus on how disaster communication system can benefit from recent advances in wireless communication technologies especially mobile technologies and devices. The paper provides an overview of how the new generation of telecommunications and technologies such as 4G/LTE, Device to Device (D2D) and 5G can improve the potential of disaster networks. D2D is a promising technology for 5G networks, providing high data rates, increased spectral and energy efficiencies, reduced end-to-end delay and transmission power. We examine a scenario of multi-hop D2D communications where one UE may help other UEs to exchange information, by utilizing cellular network technique. Results show the average energy-efficiency spectral- efficiency of these transmission types are enhanced when the number of hops used in multi-hop links increases. The effect of resource group allocation is also pointed out for efficient design of system.",
"title": ""
},
{
"docid": "e447a0129f01a096f03b16c2ee16c888",
"text": "Many authors use feedforward neural networks for modeling and forecasting time series. Most of these applications are mainly experimental, and it is often difficult to extract a general methodology from the published studies. In particular, the choice of architecture is a tricky problem. We try to combine the statistical techniques of linear and nonlinear time series with the connectionist approach. The asymptotical properties of the estimators lead us to propose a systematic methodology to determine which weights are nonsignificant and to eliminate them to simplify the architecture. This method (SSM or statistical stepwise method) is compared to other pruning techniques and is applied to some artificial series, to the famous Sunspots benchmark, and to daily electrical consumption data.",
"title": ""
},
{
"docid": "be398b849ba0caf2e714ea9cc8468d78",
"text": "Gadolinium based contrast agents (GBCAs) play an important role in the diagnostic evaluation of many patients. The safety of these agents has been once again questioned after gadolinium deposits were observed and measured in brain and bone of patients with normal renal function. This retention of gadolinium in the human body has been termed \"gadolinium storage condition\". The long-term and cumulative effects of retained gadolinium in the brain and elsewhere are not as yet understood. Recently, patients who report that they suffer from chronic symptoms secondary to gadolinium exposure and retention created gadolinium-toxicity on-line support groups. Their self-reported symptoms have recently been published. Bone and joint complaints, and skin changes were two of the most common complaints. This condition has been termed \"gadolinium deposition disease\". In this review we will address gadolinium toxicity disorders, from acute adverse reactions to GBCAs to gadolinium deposition disease, with special emphasis on the latter, as it is the most recently described and least known.",
"title": ""
},
{
"docid": "6c007a6e1a40f5f798d619fed9e9d5c9",
"text": "The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, hardware realization and analysis of a few initially proposed structures, and conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize on the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures.",
"title": ""
},
{
"docid": "17d484e84b2d30d0108537112e6dc31d",
"text": "Surface speckle pattern intensity distribution resulting from laser light scattering from a rough surface contains various information about the surface geometrical and physical properties. A surface roughness measurement technique based on the texture analysis of surface speckle pattern texture images is put forward. In the surface roughness measurement technique, the speckle pattern texture images are taken by a simple setup configuration consisting of a laser and a CCD camera. Our experimental results show that the surface roughness contained in the surface speckle pattern texture images has a good monotonic relationship with their energy feature of the gray-level co-occurrence matrices. After the measurement system is calibrated by a standard surface roughness specimen, the surface roughness of the object surface composed of the same material and machined by the same method as the standard specimen surface can be evaluated from a single speckle pattern texture image. The robustness of the characterization of speckle pattern texture for surface roughness is also discussed. Thus the surface roughness measurement technique can be used for an in-process surface measurement.",
"title": ""
},
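To make the GLCM energy feature mentioned above concrete, here is a small self-contained sketch that builds a horizontal-offset co-occurrence matrix from a quantized image and computes its energy; the offset, quantization level and normalization are illustrative choices, not the paper's exact settings.

```python
# Minimal gray-level co-occurrence matrix (GLCM) and its energy feature,
# computed for a single horizontal offset on a quantized grayscale image.
import numpy as np

def glcm_energy(image, levels=32):
    # Quantize intensities to `levels` gray levels.
    q = (image.astype(float) / (image.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels), dtype=float)
    # Count co-occurrences of horizontally adjacent pixel pairs.
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1.0)
    glcm /= glcm.sum()                      # normalize to joint probabilities
    return float(np.sum(glcm ** 2))         # energy = sum of squared entries

# Example: noisier (rougher-looking) surfaces tend to give lower GLCM energy.
rng = np.random.default_rng(0)
smooth = rng.normal(128, 5, (64, 64)).clip(0, 255)
rough = rng.normal(128, 60, (64, 64)).clip(0, 255)
print(glcm_energy(smooth), glcm_energy(rough))
```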
{
"docid": "e2c9c7c26436f0f7ef0067660b5f10b8",
"text": "The naive Bayesian classifier (NBC) is a simple yet very efficient classification technique in machine learning. But the unpractical condition independence assumption of NBC greatly degrades its performance. There are two primary ways to improve NBC's performance. One is to relax the condition independence assumption in NBC. This method improves NBC's accuracy by searching additional condition dependencies among attributes of the samples in a scope. It usually involves in very complex search algorithms. Another is to change the representation of the samples by creating new attributes from the original attributes, and construct NBC from these new attributes while keeping the condition independence assumption. Key problem of this method is to guarantee strong condition independencies among the new attributes. In the paper, a new means of making attribute set, which maps the original attributes to new attributes according to the information geometry and Fisher score, is presented, and then the FS-NBC on the new attributes is constructed. The condition dependence relation among the new attributes theoretically is discussed. We prove that these new attributes are condition independent of each other under certain conditions. The experimental results show that our method improves performance of NBC excellently",
"title": ""
},
{
"docid": "e0633afb6f4dcb1561dbb23b6e3aa713",
"text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.",
"title": ""
},
{
"docid": "7d391483dfe60f4ad60735264a0b7ab2",
"text": "The growing interest and the market for indoor Location Based Service (LBS) have been drivers for a huge demand for building data and reconstructing and updating of indoor maps in recent years. The traditional static surveying and mapping methods can't meet the requirements for accuracy, efficiency and productivity in a complicated indoor environment. Utilizing a Simultaneous Localization and Mapping (SLAM)-based mapping system with ranging and/or camera sensors providing point cloud data for the maps is an auspicious alternative to solve such challenges. There are various kinds of implementations with different sensors, for instance LiDAR, depth cameras, event cameras, etc. Due to the different budgets, the hardware investments and the accuracy requirements of indoor maps are diverse. However, limited studies on evaluation of these mapping systems are available to offer a guideline of appropriate hardware selection. In this paper we try to characterize them and provide some extensive references for SLAM or mapping system selection for different applications. Two different indoor scenes (a L shaped corridor and an open style library) were selected to review and compare three different mapping systems, namely: (1) a commercial Matterport system equipped with depth cameras; (2) SLAMMER: a high accuracy small footprint LiDAR with a fusion of hector-slam and graph-slam approaches; and (3) NAVIS: a low-cost large footprint LiDAR with Improved Maximum Likelihood Estimation (IMLE) algorithm developed by the Finnish Geospatial Research Institute (FGI). Firstly, an L shaped corridor (2nd floor of FGI) with approximately 80 m length was selected as the testing field for Matterport testing. Due to the lack of quantitative evaluation of Matterport indoor mapping performance, we attempted to characterize the pros and cons of the system by carrying out six field tests with different settings. The results showed that the mapping trajectory would influence the final mapping results and therefore, there was optimal Matterport configuration for better indoor mapping results. Secondly, a medium-size indoor environment (the FGI open library) was selected for evaluation of the mapping accuracy of these three indoor mapping technologies: SLAMMER, NAVIS and Matterport. Indoor referenced maps were collected with a small footprint Terrestrial Laser Scanner (TLS) and using spherical registration targets. The 2D indoor maps generated by these three mapping technologies were assessed by comparing them with the reference 2D map for accuracy evaluation; two feature selection methods were also utilized for the evaluation: interactive selection and minimum bounding rectangles (MBRs) selection. The mapping RMS errors of SLAMMER, NAVIS and Matterport were 2.0 cm, 3.9 cm and 4.4 cm, respectively, for the interactively selected features, and the corresponding values using MBR features were 1.7 cm, 3.2 cm and 4.7 cm. The corresponding detection rates for the feature points were 100%, 98.9%, 92.3% for the interactive selected features and 100%, 97.3% and 94.7% for the automated processing. The results indicated that the accuracy of all the evaluated systems could generate indoor map at centimeter-level, but also variation of the density and quality of collected point clouds determined the applicability of a system into a specific LBS.",
"title": ""
},
{
"docid": "908f862dea52cd9341d2127928baa7de",
"text": "Arsenic's history in science, medicine and technology has been overshadowed by its notoriety as a poison in homicides. Arsenic is viewed as being synonymous with toxicity. Dangerous arsenic concentrations in natural waters is now a worldwide problem and often referred to as a 20th-21st century calamity. High arsenic concentrations have been reported recently from the USA, China, Chile, Bangladesh, Taiwan, Mexico, Argentina, Poland, Canada, Hungary, Japan and India. Among 21 countries in different parts of the world affected by groundwater arsenic contamination, the largest population at risk is in Bangladesh followed by West Bengal in India. Existing overviews of arsenic removal include technologies that have traditionally been used (oxidation, precipitation/coagulation/membrane separation) with far less attention paid to adsorption. No previous review is available where readers can get an overview of the sorption capacities of both available and developed sorbents used for arsenic remediation together with the traditional remediation methods. We have incorporated most of the valuable available literature on arsenic remediation by adsorption ( approximately 600 references). Existing purification methods for drinking water; wastewater; industrial effluents, and technological solutions for arsenic have been listed. Arsenic sorption by commercially available carbons and other low-cost adsorbents are surveyed and critically reviewed and their sorption efficiencies are compared. Arsenic adsorption behavior in presence of other impurities has been discussed. Some commercially available adsorbents are also surveyed. An extensive table summarizes the sorption capacities of various adsorbents. Some low-cost adsorbents are superior including treated slags, carbons developed from agricultural waste (char carbons and coconut husk carbons), biosorbents (immobilized biomass, orange juice residue), goethite and some commercial adsorbents, which include resins, gels, silica, treated silica tested for arsenic removal come out to be superior. Immobilized biomass adsorbents offered outstanding performances. Desorption of arsenic followed by regeneration of sorbents has been discussed. Strong acids and bases seem to be the best desorbing agents to produce arsenic concentrates. Arsenic concentrate treatment and disposal obtained is briefly addressed. This issue is very important but much less discussed.",
"title": ""
},
{
"docid": "74160a53096cbd67c442e5e653fdd99b",
"text": "The general disproportion of urban development and the socio-economical crisis in Serbia, followed by a number of acute and chronic stressors, as well as years of accumulated trauma, prevented the parallel physical, mental and social adaptation of society as a whole. These trends certainly affected the quality of mental health and well-being, particularly on the vulnerable urban population, increasing the absolute number of people with depression, stress and psychosomatic disorders. This study was pioneering in Serbia and was conducted in collaboration with the Faculty of Forestry, the Institute of Mental Health and the Botanical Garden in Belgrade, in order to understand how spending time and performing horticulture therapy in specially designed urban green environments can improve mental health. The participants were psychiatric patients (n=30), users of the day hospital of the Institute who were randomly selected for the study, and the control group, assessed for depression, anxiety and stress before and after the intervention, using a DASS21 scale. During the intervention period the study group stayed in the Botanical garden and participated in a special programme of horticulture therapy. In order to exclude any possible \"special treatment'' or ''placebo effect\", the control group was included in occupational art therapy while it continued to receive conventional therapy. The test results indicated that nature based therapy had a positive influence on the mental health and well-being of the participants. Furthermore, the difference in the test results of the subscale stress before and after the intervention for the study group was F1.28 = 5.442 and p<;.05. According to socio demographic and clinical variables, the interesting trend was recorded on the subscale of anxiety showing that the male participants in the study group were more anxious, with the most pronounced inflection noted on this scale after treatment. The results of this study have shown that recuperation from stress, depression and anxiety was possible and much more complete when participants were involved in horticulture therapy as a nature-based solution for improving mental health.",
"title": ""
},
{
"docid": "9cd92fa5085c1f7edec5c2ba53c549cc",
"text": "Support theory represents probability judgment in terms of the support, or strength of evidence, of the focal relative to the alternative hypothesis. It assumes that the judged probability of an event generally increases when its description is unpacked into disjoint components (implicit subadditivity). This article presents a significant extension of the theory in which the judged probability of an explicit disjunction is less than or equal to the sum of the judged probabilities of its disjoint components (explicit subadditivity). Several studies of probability and frequency judgment demonstrate both implicit and explicit subadditivity. The former is attributed to enhanced availability, whereas the latter is attributed to repacking and anchoring.",
"title": ""
},
{
"docid": "1933e3f26cae0b1a1cef204acbbb9ebd",
"text": "Should individuals include actively managed mutual funds in their investment portfolios? They should if and only if the result of active management is superior performance due to skill. This paper employs a previously ignored statistical technique to detect whether skill drives the superior performance of some mutual funds. This technique, the generalized binomial distribution, models a sequence of n Bernoulli events in which the result of each event is either success or failure (successive quarters during which funds outperform or do not outperform the market). Results display a statistically significant proportion of mutual funds, though small in number, outperform their peers on a risk–adjusted basis and do so as a result of skill, not luck. This result signifies the rationality of entrusting one’s wealth to successful and skillfully managed mutual funds. Hence, a well–designed portfolio that includes actively managed funds may trump a wholly passive index fund strategy. JEL Classifications: G10, G11, G12",
"title": ""
},
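As a numeric illustration of the luck-versus-skill reasoning above (not the paper's generalized binomial model, which lets success probabilities vary across funds and quarters), the sketch below asks how surprising a fund's run of outperforming quarters would be under pure luck.

```python
# Illustration of the luck-versus-skill question with a plain binomial model.
from scipy.stats import binom

def luck_p_value(n_quarters, n_outperform, p_luck=0.5):
    """P(at least this many outperforming quarters | pure luck)."""
    return binom.sf(n_outperform - 1, n_quarters, p_luck)

# A fund beating its peers in 30 of 40 quarters is very unlikely under luck alone.
print(luck_p_value(40, 30))   # roughly 0.001
```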
{
"docid": "51b7cf820e3a46b5daeee6eb83058077",
"text": "Previous taxonomies of software change have focused on the purpose of the change (i.e., the why) rather than the underlying mechanisms. This paper proposes a taxonomy of software change based on characterizing the mechanisms of change and the factors that influence these mechanisms. The ultimate goal of this taxonomy is to provide a framework that positions concrete tools, formalisms and methods within the domain of software evolution. Such a framework would considerably ease comparison between the various mechanisms of change. It would also allow practitioners to identify and evaluate the relevant tools, methods and formalisms for a particular change scenario. As an initial step towards this taxonomy, the paper presents a framework that can be used to characterize software change support tools and to identify the factors that impact on the use of these tools. The framework is evaluated by applying it to three different change support tools and by comparing these tools based on this analysis. Copyright c © 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "73dcb2e355679f2e466029fbbb24a726",
"text": "Many of the world's most popular websites catalyze their growth through invitations from existing members. New members can then in turn issue invitations, and so on, creating cascades of member signups that can spread on a global scale. Although these diffusive invitation processes are critical to the popularity and growth of many websites, they have rarely been studied, and their properties remain elusive. For instance, it is not known how viral these cascades structures are, how cascades grow over time, or how diffusive growth affects the resulting distribution of member characteristics present on the site. In this paper, we study the diffusion of LinkedIn, an online professional network comprising over 332 million members, a large fraction of whom joined the site as part of a signup cascade. First we analyze the structural patterns of these signup cascades, and find them to be qualitatively different from previously studied information diffusion cascades. We also examine how signup cascades grow over time, and observe that diffusion via invitations on LinkedIn occurs over much longer timescales than are typically associated with other types of online diffusion. Finally, we connect the cascade structures with rich individual-level attribute data to investigate the interplay between the two. Using novel techniques to study the role of homophily in diffusion, we find striking differences between the local, edge-wise homophily and the global, cascade-level homophily we observe in our data, suggesting that signup cascades form surprisingly coherent groups of members.",
"title": ""
},
{
"docid": "e605e0417160dec6badddd14ec093843",
"text": "Within both academic and policy discourses, the concept of media literacy is being extended from its traditional focus on print and audiovisual media to encompass the internet and other new media. The present article addresses three central questions currently facing the public, policy-makers and academy: What is media literacy? How is it changing? And what are the uses of literacy? The article begins with a definition: media literacy is the ability to access, analyse, evaluate and create messages across a variety of contexts. This four-component model is then examined for its applicability to the internet. Having advocated this skills-based approach to media literacy in relation to the internet, the article identifies some outstanding issues for new media literacy crucial to any policy of promoting media literacy among the population. The outcome is to extend our understanding of media literacy so as to encompass the historically and culturally conditioned relationship among three processes: (i) the symbolic and material representation of knowledge, culture and values; (ii) the diffusion of interpretative skills and abilities across a (stratified) population; and (iii) the institutional, especially, the state management of the power that access to and skilled use of knowledge brings to those who are ‘literate’.",
"title": ""
}
] |
scidocsrr
|
524c817ec1f456df3dcb2d52a17995c9
|
Predicting online e-marketplace sales performances: A big data approach
|
[
{
"docid": "66ad4513ed36329c299792ce35b2b299",
"text": "Reducing social uncertainty—understanding, predicting, and controlling the behavior of other people—is a central motivating force of human behavior. When rules and customs are not su4cient, people rely on trust and familiarity as primary mechanisms to reduce social uncertainty. The relative paucity of regulations and customs on the Internet makes consumer familiarity and trust especially important in the case of e-Commerce. Yet the lack of an interpersonal exchange and the one-time nature of the typical business transaction on the Internet make this kind of consumer trust unique, because trust relates to other people and is nourished through interactions with them. This study validates a four-dimensional scale of trust in the context of e-Products and revalidates it in the context of e-Services. The study then shows the in:uence of social presence on these dimensions of this trust, especially benevolence, and its ultimate contribution to online purchase intentions. ? 2004 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "721a64c9a5523ba836318edcdb8de021",
"text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.",
"title": ""
},
{
"docid": "140a9255e8ee104552724827035ee10a",
"text": "Our goal is to design architectures that retain the groundbreaking performance of CNNs for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks",
"title": ""
},
{
"docid": "9c4c13c38e2b96aa3141b1300ca356c6",
"text": "Awareness plays a major role in human cognition and adaptive behaviour, though mechanisms involved remain unknown. Awareness is not an objectively established fact, therefore, despite extensive research, scientists have not been able to fully interpret its contribution in multisensory integration and precise neural firing, hence, questions remain: (1) How the biological neuron integrates the incoming multisensory signals with respect to different situations? (2) How are the roles of incoming multisensory signals defined (selective amplification or attenuation) that help neuron(s) to originate a precise neural firing complying with the anticipated behavioural-constraint of the environment? (3) How are the external environment and anticipated behaviour integrated? Recently, scientists have exploited deep learning architectures to integrate multimodal cues and capture context-dependent meanings. Yet, these methods suffer from imprecise behavioural representation and a limited understanding of neural circuitry or underlying information processing mechanisms with respect to the outside world. In this research, we introduce a new theory on the role of awareness and universal context that can help answering the aforementioned crucial neuroscience questions. Specifically, we propose a class of spiking conscious neuron in which the output depends on three functionally distinctive integrated input variables: receptive field (RF), local contextual field (LCF), and universal contextual field (UCF) a newly proposed dimension. The RF defines the incoming ambiguous sensory signal, LCF defines the modulatory sensory signal coming from other parts of the brain, and UCF defines the awareness. It is believed that the conscious neuron inherently contains enough knowledge about the situation in which the problem is to be solved based on past learning and reasoning and it defines the precise role of incoming multisensory signals (amplification or attenuation) to originate a precise neural firing (exhibiting switch-like behaviour). It is shown, when implemented within an SCNN, the conscious neuron helps modelling a more precise human behaviour e.g., when exploited to model human audiovisual speech processing, the SCNN performed comparably to deep long-short-term memory (LSTM) network. We believe that the proposed theory could be applied to address a range of real-world problems including elusive neural disruptions, explainable artificial intelligence, human-like computing, low-power neuromorphic chips etc.",
"title": ""
},
{
"docid": "d3eeb9e96881dc3bd60433bdf3e89749",
"text": "The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. M. Bushnell, V.D. Agrawal Essentials of Electronic Testing for Digital, Memory and MixedSignal VLSI Circuits",
"title": ""
},
{
"docid": "486e15d89ea8d0f6da3b5133c9811ee1",
"text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.",
"title": ""
},
{
"docid": "226bdf9c36a13900cf11f37bef816f04",
"text": "We describe a new class of subsampling techniques for CNNs, termed multisampling, that significantly increases the amount of information kept by feature maps through subsampling layers. One version of our method, which we call checkered subsampling, significantly improves the accuracy of state-of-the-art architectures such as DenseNet and ResNet without any additional parameters and, remarkably, improves the accuracy of certain pretrained ImageNet models without any training or fine-tuning. We glean new insight into the nature of data augmentations and demonstrate, for the first time, that coarse feature maps are significantly bottlenecking the performance of neural networks in image classification.",
"title": ""
},
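The checkered decomposition described above is easier to picture in code; the hypothetical sketch below splits a feature map into two complementary checkerboard sublattices so that subsampling keeps both instead of a single stride-2 grid. It illustrates the concept only and is not the authors' implementation.

```python
# Illustration of checkered subsampling: keep two complementary checkerboard
# sublattices of an (N, C, H, W) feature map instead of a single stride-2 grid.
import numpy as np

def checkered_subsample(x):
    # Sublattice A: even rows/even cols together with odd rows/odd cols.
    a = np.concatenate([x[..., 0::2, 0::2], x[..., 1::2, 1::2]], axis=1)
    # Sublattice B: even rows/odd cols together with odd rows/even cols.
    b = np.concatenate([x[..., 0::2, 1::2], x[..., 1::2, 0::2]], axis=1)
    # Stack the two sublattices as separate "samples" of the same layer.
    return np.stack([a, b], axis=1)       # shape (N, 2, 2C, H/2, W/2)

x = np.random.rand(1, 8, 32, 32)
print(checkered_subsample(x).shape)       # (1, 2, 16, 16, 16)
```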
{
"docid": "ecd4dd9d8807df6c8194f7b4c7897572",
"text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.",
"title": ""
},
{
"docid": "fef24d203d0a2e5d52aa887a0a442cf3",
"text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.",
"title": ""
},
{
"docid": "75ef3706a44edf1a96bcb0ce79b07761",
"text": "Bag-of-words (BOW), which represents an image by the histogram of local patches on the basis of a visual vocabulary, has attracted intensive attention in visual categorization due to its good performance and flexibility. Conventional BOW neglects the contextual relations between local patches due to its Naïve Bayesian assumption. However, it is well known that contextual relations play an important role for human beings to recognize visual categories from their local appearance. This paper proposes a novel contextual bag-of-words (CBOW) representation to model two kinds of typical contextual relations between local patches, i.e., a semantic conceptual relation and a spatial neighboring relation. To model the semantic conceptual relation, visual words are grouped on multiple semantic levels according to the similarity of class distribution induced by them, accordingly local patches are encoded and images are represented. To explore the spatial neighboring relation, an automatic term extraction technique is adopted to measure the confidence that neighboring visual words are relevant. Word groups with high relevance are used and their statistics are incorporated into the BOW representation. Classification is taken using the support vector machine with an efficient kernel to incorporate the relational information. The proposed approach is extensively evaluated on two kinds of visual categorization tasks, i.e., video event and scene categorization. Experimental results demonstrate the importance of contextual relations of local patches and the CBOW shows superior performance to conventional BOW.",
"title": ""
},
{
"docid": "22b52198123909ff7b9a7d296eb88f7e",
"text": "This paper addresses the problem of outdoor terrain modeling for the purposes of mobile robot navigation. We propose an approach in which a robot acquires a set of terrain models at differing resolutions. Our approach addresses one of the major shortcomings of Bayesian reasoning when applied to terrain modeling, namely artifacts that arise from the limited spatial resolution of robot perception. Limited spatial resolution causes small obstacles to be detectable only at close range. Hence, a Bayes filter estimating the state of terrain segments must consider the ranges at which that terrain is observed. We develop a multi-resolution approach that maintains multiple navigation maps, and derive rational arguments for the number of layers and their resolutions. We show that our approach yields significantly better results in a practical robot system, capable of acquiring detailed 3-D maps in large-scale outdoor environments.",
"title": ""
},
{
"docid": "bbb06abacfd8f4eb01fac6b11a4447bf",
"text": "In this paper, we present a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm following an inertial assisted Kalman Filter and reusing the estimated 3D map. By leveraging an inertial assisted Kalman Filter, we achieve an efficient motion tracking bearing fast dynamic movement in the front-end. To enable place recognition and reduce the trajectory estimation drift, we construct a factor graph based non-linear optimization in the back-end. We carefully design a feedback mechanism to balance the front/back ends ensuring the estimation accuracy. We also propose a novel initialization method that accurately estimate the scale factor, the gravity, the velocity, and gyroscope and accelerometer biases in a very robust way. We evaluated the algorithm on a public dataset, when compared to other state-of-the-art monocular Visual-Inertial SLAM approaches, our algorithm achieves better accuracy and robustness in an efficient way. By the way, we also evaluate our algorithm in a MonocularInertial setup with a low cost IMU to achieve a robust and lowdrift realtime SLAM system.",
"title": ""
},
{
"docid": "27bff398452f746a643bd3f4fcff2949",
"text": "Spectrum management is a crucial task in wireless networks. The research in cognitive radio networks by applying Markov is highly significant suitable model for spectrum management. This research work is the simulation study of variants of basic Markov models with a specific application for channel allocation problem in cognitive radio networks by applying continuous Markov process. The Markov channel allocation model is designed and implemented in MATLAB environment, and simulation results are analyzed.",
"title": ""
},
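The continuous-time Markov treatment of channel allocation summarized above can be illustrated with a tiny two-state simulation; the arrival and departure rates below are made-up parameters, not values from the paper.

```python
# Toy continuous-time Markov chain for one licensed channel: state 0 = idle
# (available to secondary users), state 1 = occupied by the primary user.
import numpy as np

def simulate_channel(rate_arrive=0.3, rate_depart=0.7, horizon=10_000.0, seed=1):
    rng = np.random.default_rng(seed)
    t, state, idle_time = 0.0, 0, 0.0
    while t < horizon:
        rate = rate_arrive if state == 0 else rate_depart
        dwell = rng.exponential(1.0 / rate)   # exponential holding time
        if state == 0:
            idle_time += dwell
        t += dwell
        state = 1 - state
    return idle_time / t

# Long-run idle fraction approaches rate_depart / (rate_arrive + rate_depart) = 0.7.
print(simulate_channel())
```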
{
"docid": "3afa9f84c76bdca939c0a3dc645b4cbf",
"text": "Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them through gradient-descent is too slow and unstable for practical use in reinforcement learning environments. Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e. memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components or neurons. We demonstrate the method in two POMDP tasks that involve temporal dependencies of up to thousands of time-steps, and show that it is faster and simpler than the current best conventional reinforcement learning system on these tasks.",
"title": ""
},
{
"docid": "95b825ee3290572189ba8d6957b6a307",
"text": "This paper proposes a working definition of the term gamification as the use of game design elements in non-game contexts. This definition is related to similar concepts such as serious games, serious gaming, playful interaction, and game-based technologies. Origins Gamification as a term originated in the digital media industry. The first documented uses dates back to 2008, but gamification only entered widespread adoption in the second half of 2010, when several industry players and conferences popularized it. It is also—still—a heavily contested term; even its entry into Wikipedia has been contested. Within the video game and digital media industry, discontent with some interpretations have already led designers to coin different terms for their own practice (e.g., gameful design) to distance themselves from recent negative connotations [13]. Until now, there has been hardly any academic attempt at a definition of gamification. Current uses of the word seem to fluctuate between two major ideas. The first is the increasing societal adoption and institutionalization of video games and the influence games and game elements have in shaping our everyday life and interactions. Game designer Jesse Schell summarized this as the trend towards a Gamepocalypse, \" when Copyright is held by the author/owner(s).",
"title": ""
},
{
"docid": "4f186e992cd7d5eadb2c34c0f26f4416",
"text": "a r t i c l e i n f o Mobile devices, namely phones and tablets, have long gone \" smart \". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture , whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \" mobile \" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays, it is most often referred to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …",
"title": ""
},
{
"docid": "046f6c5cc6065c1cb219095fb0dfc06f",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "8b2d6ce5158c94f2e21ff4ebd54af2b5",
"text": "Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "f5ac213265b9ac8674af92fb2541cebd",
"text": "BACKGROUND\nCorneal oedema is a common post-operative problem that delays or prevents visual recovery from ocular surgery. Honey is a supersaturated solution of sugars with an acidic pH, high osmolarity and low water content. These characteristics inhibit the growth of micro-organisms, reduce oedema and promote epithelialisation. This clinical case series describes the use of a regulatory approved Leptospermum species honey ophthalmic product, in the management of post-operative corneal oedema and bullous keratopathy.\n\n\nMETHODS\nA retrospective review of 18 consecutive cases (30 eyes) with corneal oedema persisting beyond one month after single or multiple ocular surgical procedures (phacoemulsification cataract surgery and additional procedures) treated with Optimel Antibacterial Manuka Eye Drops twice to three times daily as an adjunctive therapy to conventional topical management with corticosteroid, aqueous suppressants, hypertonic sodium chloride five per cent, eyelid hygiene and artificial tears. Visual acuity and central corneal thickness were measured before and at the conclusion of Optimel treatment.\n\n\nRESULTS\nA temporary reduction in corneal epithelial oedema lasting up to several hours was observed after the initial Optimel instillation and was associated with a reduction in central corneal thickness, resolution of epithelial microcysts, collapse of epithelial bullae, improved corneal clarity, improved visualisation of the intraocular structures and improved visual acuity. Additionally, with chronic use, reduction in punctate epitheliopathy, reduction in central corneal thickness and improvement in visual acuity were achieved. Temporary stinging after Optimel instillation was experienced. No adverse infectious or inflammatory events occurred during treatment with Optimel.\n\n\nCONCLUSIONS\nOptimel was a safe and effective adjunctive therapeutic strategy in the management of persistent post-operative corneal oedema and warrants further investigation in clinical trials.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
ee6006237f6001fc7561fd07e99a6cd3
|
Telco Churn Prediction with Big Data
|
[
{
"docid": "d83d672642531e1744afe77ed8379b90",
"text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.",
"title": ""
}
] |
[
{
"docid": "467d953d489ca8f7d75c798d6e948a86",
"text": "The ability to detect recent natural selection in the human population would have profound implications for the study of human history and for medicine. Here, we introduce a framework for detecting the genetic imprint of recent positive selection by analysing long-range haplotypes in human populations. We first identify haplotypes at a locus of interest (core haplotypes). We then assess the age of each core haplotype by the decay of its association to alleles at various distances from the locus, as measured by extended haplotype homozygosity (EHH). Core haplotypes that have unusually high EHH and a high population frequency indicate the presence of a mutation that rose to prominence in the human gene pool faster than expected under neutral evolution. We applied this approach to investigate selection at two genes carrying common variants implicated in resistance to malaria: G6PD and CD40 ligand. At both loci, the core haplotypes carrying the proposed protective mutation stand out and show significant evidence of selection. More generally, the method could be used to scan the entire genome for evidence of recent positive selection.",
"title": ""
},
{
"docid": "fc8063bddea3c70d77636683a03a52d7",
"text": "Speaker attributed variability are undesirable in speaker independent speech recognition systems. The gender of the speaker is one of the influential sources of this variability. Common speech recognition systems tuned to the ensemble statistics over many speakers to compensate the inherent variability of speech signal. In this paper we will separate the datasets based on the gender to build gender dependent hidden Markov model for each word. The gender separation criterion is the average pitch frequency of the speaker. Experimental evaluation shows significant improvement in word recognition accuracy over the gender independent method with a slight increase in the processing computation.",
"title": ""
},
{
"docid": "f886f9ff8281b6ad34af111a06834c43",
"text": "Brain waves can aptly define the state of a person's mind. High activity and attention lead to dominant beta waves while relaxation and focus lead to dominant alpha waves in the brain. Alpha state of mind is ideal for learning and memory retention. In our experiment we aim to increase alpha waves and decrease beta waves in a person with the help of music to measure improvement in memory retention. Our hypothesis is that, when a person listens to music which causes relaxation, he is more likely to attain the alpha state of mind and enhance his memory retention ability. To verify this hypothesis, we conducted an experiment on 5 participants. The participants were asked to take a similar quiz twice, under different states of mind. During the experimentation process, the brain activity of the participants was recorded and analyzed using MUSE, an off-the-shelf device for brainwave capturing and analysis.",
"title": ""
},
{
"docid": "86c998f5ffcddb0b74360ff27b8fead4",
"text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.",
"title": ""
},
{
"docid": "b1b842bed367be06c67952c34921f6f6",
"text": "Definitions and uses of the concept of empowerment are wide-ranging: the term has been used to describe the essence of human existence and development, but also aspects of organizational effectiveness and quality. The empowerment ideology is rooted in social action where empowerment was associated with community interests and with attempts to increase the power and influence of oppressed groups (such as workers, women and ethnic minorities). Later, there was also growing recognition of the importance of the individual's characteristics and actions. Based on a review of the literature, this paper explores the uses of the empowerment concept as a framework for nurses' professional growth and development. Given the complexity of the concept, it is vital to understand the underlying philosophy before moving on to define its substance. The articles reviewed were classified into three groups on the basis of their theoretical orientation: critical social theory, organization theory and social psychological theory. Empowerment seems likely to provide for an umbrella concept of professional development in nursing.",
"title": ""
},
{
"docid": "ce9b9cc57277b635262a5d4af999dc32",
"text": "Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "62aa091313743dda4fc8211eccd78f83",
"text": "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, t he most successful technique for regularizing neural networks, does n ot work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropo ut t LSTMs, and show that it substantially reduces overfitting on a varie ty of tasks. These tasks include language modeling, speech recognition, image capt ion generation, and machine translation.",
"title": ""
},
{
"docid": "bd42bffcbb76d4aadde3df502326655a",
"text": "We present a novel class of actor-critic algorithms for actors consisting of sets of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the module’s input, output, and the TD error broadcast by a critic. Such updates are necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement learning methods.",
"title": ""
},
{
"docid": "70c6aaf0b0fc328c677d7cb2249b68bf",
"text": "In this paper, we discuss and review how combined multiview imagery from satellite to street level can benefit scene analysis. Numerous works exist that merge information from remote sensing and images acquired from the ground for tasks such as object detection, robots guidance, or scene understanding. What makes the combination of overhead and street-level images challenging are the strongly varying viewpoints, the different scales of the images, their illuminations and sensor modality, and time of acquisition. Direct (dense) matching of images on a per-pixel basis is thus often impossible, and one has to resort to alternative strategies that will be discussed in this paper. For such purpose, we review recent works that attempt to combine images taken from the ground and overhead views for purposes like scene registration, reconstruction, or classification. After the theoretical review, we present three recent methods to showcase the interest and potential impact of such fusion on real applications (change detection, image orientation, and tree cataloging), whose logic can then be reused to extend the use of ground-based images in remote sensing and vice versa. Through this review, we advocate that cross fertilization between remote sensing, computer vision, and machine learning is very valuable to make the best of geographic data available from Earth observation sensors and ground imagery. Despite its challenges, we believe that integrating these complementary data sources will lead to major breakthroughs in Big GeoData. It will open new perspectives for this exciting and emerging field.",
"title": ""
},
{
"docid": "cc6cf6557a8be12d8d3a4550163ac0a9",
"text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.",
"title": ""
},
{
"docid": "8988aaa4013ef155cbb09644ca491bab",
"text": "Uses and gratification theory aids in the assessment of how audiences use a particular medium and the gratifications they derive from that use. In this paper this theory has been applied to derive Internet uses and gratifications for Indian Internet users. This study proceeds in four stages. First, six first-order gratifications namely self development, wide exposure, user friendliness, relaxation, career opportunities, and global exchange were identified using an exploratory factor analysis. Then the first order gratifications were subjected to firstorder confirmatory factor analysis. Third, using second-order confirmatory factor analysis three types of secondorder gratifications were obtained, namely process gratifications, content gratifications and social gratifications. Finally, with the use of t-tests the study has shown that males and females differ significantly on the gratification factors “self development”, “user friendliness”, “wide exposure” and “relaxation.” The intended audience consists of masters’ level students and doctoral students who want to learn exploratory factor analysis and confirmatory factor analysis. This case study can also be used to teach the basics of structural equation modeling using the software AMOS.",
"title": ""
},
{
"docid": "e291f7ada6890ae9db8417b29f35d061",
"text": "This study proposes a new framework for citation content analysis (CCA), for syntactic and semantic analysis of citation content that can be used to better analyze the rich sociocultural context of research behavior. This framework could be considered the next generation of citation analysis. The authors briefly review the history and features of content analysis in traditional social sciences and its previous application in library and information science (LIS). Based on critical discussion of the theoretical necessity of a new method as well as the limits of citation analysis, the nature and purposes of CCA are discussed, and potential procedures to conduct CCA, including principles to identify the reference scope, a two-dimensional (citing and cited) and two-module (syntactic and semantic) codebook, are provided and described. Future work and implications are also suggested.",
"title": ""
},
{
"docid": "a926341e8b663de6c412b8e3a61ee171",
"text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables",
"title": ""
},
{
"docid": "91eac59a625914805a22643c6fe79ad1",
"text": "Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes.",
"title": ""
},
{
"docid": "a39091796e8f679f246baa8dce08f213",
"text": "Resource scheduling in cloud is a challenging job and the scheduling of appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In cloud environment, heterogeneity, uncertainty and dispersion of resources encounters problems of allocation of resources, which cannot be addressed with existing resource allocation policies. Researchers still face troubles to select the efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature of resource scheduling algorithms. This research depicts a broad methodical literature analysis of resource management in the area of cloud in general and cloud resource scheduling in specific. In this survey, standard methodical literature analysis technique is used based on a complete collection of 110 research papers out of large collection of 1206 research papers published in 19 foremost workshops, symposiums and conferences and 11 prominent journals. The current status of resource scheduling in cloud computing is distributed into various categories. Methodical analysis of resource scheduling in cloud computing is presented, resource scheduling algorithms and management, its types and benefits with tools, resource scheduling aspects and resource distribution policies are described. The literature concerning to thirteen types of resource scheduling algorithms has also been stated. Further, eight types of resource distribution policies are described. Methodical analysis of this research work will help researchers to find the important characteristics of resource scheduling algorithms and also will help to select most suitable algorithm for scheduling a specific workload. Future research directions have also been suggested in this research work.",
"title": ""
},
{
"docid": "551f1dca9718125b385794d8e12f3340",
"text": "Social media provides increasing opportunities for users to voluntarily share their thoughts and concerns in a large volume of data. While user-generated data from each individual may not provide considerable information, when combined, they include hidden variables, which may convey significant events. In this paper, we pursue the question of whether social media context can provide socio-behavior \"signals\" for crime prediction. The hypothesis is that crowd publicly available data in social media, in particular Twitter, may include predictive variables, which can indicate the changes in crime rates. We developed a model for crime trend prediction where the objective is to employ Twitter content to identify whether crime rates have dropped or increased for the prospective time frame. We also present a Twitter sampling model to collect historical data to avoid missing data over time. The prediction model was evaluated for different cities in the United States. The experiments revealed the correlation between features extracted from the content and crime rate directions. Overall, the study provides insight into the correlation of social content and crime trends as well as the impact of social data in providing predictive indicators.",
"title": ""
},
{
"docid": "d9bd41c14c5e37ad08fc4811bb943089",
"text": "With the increased global use of online media platforms, there are more opportunities than ever to misuse those platforms or perpetrate fraud. One such fraud is within the music industry, where perpetrators create automated programs, streaming songs to generate revenue or increase popularity of an artist. With growing annual revenue of the digital music industry, there are significant financial incentives for perpetrators with fraud in mind. The focus of the study is extracting user behavioral patterns and utilising them to train and compare multiple supervised classification method to detect fraud. The machine learning algorithms examined are Logistic Regression, Support Vector Machines, Random Forest and Artificial Neural Networks. The study compares performance of these algorithms trained on imbalanced datasets carrying different fractions of fraud. The trained models are evaluated using the Precision Recall Area Under the Curve (PR AUC) and a F1-score. Results show that the algorithms achieve similar performance when trained on balanced and imbalanced datasets. It also shows that Random Forest outperforms the other methods for all datasets tested in this experiment.",
"title": ""
},
{
"docid": "db483f6aab0361ce5a3ad1a89508541b",
"text": "In this paper, we describe Swoop, a hypermedia inspired Ontology Browser and Editor based on OWL, the recently standardized Web-oriented ontology language. After discussing the design rationale and architecture of Swoop, we focus mainly on its features, using illustrative examples to highlight its use. We demonstrate that with its web-metaphor, adherence to OWL recommendations and key unique features such as Collaborative Annotation using Annotea, Swoop acts as a useful and efficient web ontology development tool. We conclude with a list of future plans for Swoop, that should further increase its overall appeal and accessibility.",
"title": ""
},
{
"docid": "0dfc905792374c8224cbe2d34fb51fe5",
"text": "Randomized direct search algorithms for continuous domains, such as evolution strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to efficiently search in non-separable and ill-conditioned landscapes the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, covariance matrix adaptation (CMA) is considered state-of-the-art in evolution strategies. In order to sample the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n 3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n 2) without resorting to outdated distributions. We derive new versions of the elitist covariance matrix adaptation evolution strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time-complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler compared to the original versions. The new update rule improves the performance of the CMA-ES for large scale machine learning problems in which the objective function can be evaluated fast.",
"title": ""
}
] |
scidocsrr
|
0511724627520aecf7a2f1eac77c26cf
|
LH-CAM: Logic-Based Higher Performance Binary CAM Architecture on FPGA
|
[
{
"docid": "e9655e2cf800d32d2a0f427a7b056e80",
"text": "Although content addressable memory (CAM) provides fast search operation; however, CAM has disadvantages like low bit density and high cost per bit. This paper presents a novel memory architecture called hybrid partitioned static random access memory-based ternary content addressable memory (HP SRAM-based TCAM), which emulates TCAM functionality with conventional SRAM, thereby eliminating the inherited disadvantages of conventional TCAMs. HP SRAM-based TCAM logically dissects conventional TCAM table in a hybrid way (column-wise and row-wise) into TCAM sub-tables, which are then processed to be mapped to their corresponding SRAM memory units. Search operation in HP SRAM-based TCAM involves two SRAM accesses followed by a logical ANDing operation. To validate and justify our approach, 512 × 36 HP SRAM-based TCAM has been implemented in Xilinx Virtex-5 field programmable gate array (FPGA) and designed using 65-nm CMOS technology. Implementation in FPGA is advantageous and a beauty of our proposed TCAM because classical TCAMs cannot be implemented in FPGA. After a thorough analysis, we have concluded that energy/bit/search of the proposed TCAM is 85.72 fJ.",
"title": ""
}
] |
[
{
"docid": "9a980844ee86080e78d16022875a4a62",
"text": "Online social networks have become a major communication platform, where people share their thoughts and opinions about any topic real-time. The short text updates people post in these network contain emotions and moods, which when measured collectively can unveil the public mood at population level and have exciting implications for businesses, governments, and societies. Therefore, there is an urgent need for developing solid methods for accurately measuring moods from large-scale social media data. In this paper, we propose PANAS-t, which measures sentiments from short text updates in Twitter based on a well-established psychometric scale, PANAS (Positive and Negative Affect Schedule). We test the efficacy of PANAS-t over 10 real notable events drawn from 1.8 billion tweets and demonstrate that it can efficiently capture the expected sentiments of a wide variety of issues spanning tragedies, technology releases, political debates, and healthcare.",
"title": ""
},
{
"docid": "37efaf5cbd7fb400b713db6c7c980d76",
"text": "Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.",
"title": ""
},
{
"docid": "c917f335a36fc28fc9228fd3370477e7",
"text": "The increasing interest in user privacy is leading to new privacy preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user’ privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user’s control. In this paper we introduce, as far as we are aware, the first federated implementation of a Collaborative Filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users’ implicit feedback and demonstrate the method’s applicability to both the MovieLens and an in-house dataset. Empirical validation confirms a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user’s privacy in a widely used recommender application while maintaining recommender performance.",
"title": ""
},
{
"docid": "66f684ba92fe735fecfbfb53571bad5f",
"text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.",
"title": ""
},
{
"docid": "3c635de0cc71f3744b3496069633bdd2",
"text": "Where malaria prospers most, human societies have prospered least. The global distribution of per-capita gross domestic product shows a striking correlation between malaria and poverty, and malaria-endemic countries also have lower rates of economic growth. There are multiple channels by which malaria impedes development, including effects on fertility, population growth, saving and investment, worker productivity, absenteeism, premature mortality and medical costs.",
"title": ""
},
{
"docid": "579db3cec4e49d53090ee13f35385c35",
"text": "In cloud computing environments, multiple tenants are often co-located on the same multi-processor system. Thus, preventing information leakage between tenants is crucial. While the hypervisor enforces software isolation, shared hardware, such as the CPU cache or memory bus, can leak sensitive information. For security reasons, shared memory between tenants is typically disabled. Furthermore, tenants often do not share a physical CPU. In this setting, cache attacks do not work and only a slow cross-CPU covert channel over the memory bus is known. In contrast, we demonstrate a high-speed covert channel as well as the first side-channel attack working across processors and without any shared memory. To build these attacks, we use the undocumented DRAM address mappings. We present two methods to reverse engineer the mapping of memory addresses to DRAM channels, ranks, and banks. One uses physical probing of the memory bus, the other runs entirely in software and is fully automated. Using this mapping, we introduce DRAMA attacks, a novel class of attacks that exploit the DRAM row buffer that is shared, even in multi-processor systems. Thus, our attacks work in the most restrictive environments. First, we build a covert channel with a capacity of up to 2 Mbps, which is three to four orders of magnitude faster than memory-bus-based channels. Second, we build a side-channel template attack that can automatically locate and monitor memory accesses. Third, we show how using the DRAM mappings improves existing attacks and in particular enables practical Rowhammer attacks on DDR4.",
"title": ""
},
{
"docid": "1b4c26d27fafff9d0d89b0d3c5a98b4a",
"text": "Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.",
"title": ""
},
{
"docid": "d85d0a26adfaf0253c1883329358ab8b",
"text": "Recent progress in using Long Short-Term Memory (LSTM) for image description has motivated the exploration of their applications for automatically describing video content with natural language sentences. By taking a video as a sequence of features, LSTM model is trained on video-sentence pairs to learn association of a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention which allows for salient features. Furthermore, most existing approaches model the translating error, but ignore the correlations between sentence semantics and visual content.\n To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multi-modal representations for generating sentences with rich semantic content. More specifically, we first propose an attention mechanism which uses the dynamic weighted sum of local 2D Convolutional Neural Network (CNN) and 3D CNN representations. Then, a LSTM decoder takes these visual features at time $t$ and the word-embedding feature at time $t$-$1$ to generate important words. Finally, we uses multi-modal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate the superiority of our method than the state-of-the-art baselines for video captioning in both BLEU and METEOR.",
"title": ""
},
{
"docid": "3033ef7f981399614efc45c62b1ac475",
"text": "This paper describes an integrated system architecture for automotive electronic systems based on multicore systems-on-chips (SoCs). We integrate functions from different suppliers into a few powerful electronic control units using a dedicated core for each function. This work is fueled by technological opportunities resulting from recent advances in the semiconductor industry and the challenges of providing dependable automotive electronic systems at competitive costs. The presented architecture introduces infrastructure IP cores to overcome key challenges in moving to automotive multicore SoCs: a time-triggered network-on-a-chip with fault isolation for the interconnection of functional IP cores, a diagnostic IP core for error detection and state recovery, a gateway IP core for interfacing legacy systems, and an IP core for reconfiguration. This paper also outlines the migration from today's federated architectures to the proposed integrated architecture using an exemplary automotive E/E system.",
"title": ""
},
{
"docid": "8cd970e1c247478f01a9fe2f62530fc4",
"text": "In this paper, we propose a method for grasping unknown objects from piles or cluttered scenes, given a point cloud from a single depth camera. We introduce a shape-based method - Symmetry Height Accumulated Features (SHAF) - that reduces the scene description complexity such that the use of machine learning techniques becomes feasible. We describe the basic Height Accumulated Features and the Symmetry Features and investigate their quality using an F-score metric. We discuss the gain from Symmetry Features for grasp classification and demonstrate the expressive power of Height Accumulated Features by comparing it to a simple height based learning method. In robotic experiments of grasping single objects, we test 10 novel objects in 150 trials and show significant improvement of 34% over a state-of-the-art method, achieving a success rate of 92%. An improvement of 29% over the competitive method was achieved for a task of clearing a table with 5 to 10 objects and overall 90 trials. Furthermore we show that our approach is easily adaptable for different manipulators by running our experiments on a second platform.",
"title": ""
},
{
"docid": "53d8734d66ffa4398d0105d6d2b55a66",
"text": "Inspite of long years of research, problem of manipulator path tracking control is the thrust area for researchers to work upon. Non-linear systems like manipulator are multi-input-multi-output, non-linear and time variant complex problem. A number of different approaches presently followed for the control of manipulator vary from classical PID (Proportional Integral Derivative) to CTC (Computed Torque Control) control techniques. This paper presents design and implementation of PID and CTC controller for robotic manipulator. Comparative study of simulated results of conventional controllers, like PID and CTC are also shown. Tracking performance and error comparison graphs are presented to show the performance of the proposed controllers.",
"title": ""
},
{
"docid": "b68e09f879e51aad3ed0ce8b696da957",
"text": "The status of current model-driven engineering technologies has matured over the last years whereas the infrastructure supporting model management is still in its infancy. Infrastructural means include version control systems, which are successfully used for the management of textual artifacts like source code. Unfortunately, they are only limited suitable for models. Consequently, dedicated solutions emerge. These approaches are currently hard to compare, because no common quality measure has been established yet and no structured test cases are available. In this paper, we analyze the challenges coming along with merging different versions of one model and derive a first categorization of typical changes and the therefrom resulting conflicts. On this basis we create a set of test cases on which we apply state-of-the-art versioning systems and report our experiences.",
"title": ""
},
{
"docid": "ed8fef21796713aba1a6375a840c8ba3",
"text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.",
"title": ""
},
{
"docid": "263485ca833637a55f18abcdfff096e2",
"text": "We propose an efficient and parameter-free scoring criterio n, the factorized conditional log-likelihood (̂fCLL), for learning Bayesian network classifiers. The propo sed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as w ell as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-the oretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-o f-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show tha t f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, us ing significantly less computational resources.",
"title": ""
},
{
"docid": "926dd2056bbe5fc5b0aef37fd6550a9c",
"text": "There is considerable, although not entirely consistent, evidence that the hippocampus inhibits most aspects of HPA activity, including basal (circadian nadir) and circadian peak secretion as well as the onset and termination of responses to stress. Although much of the evidence for these effects rests only on the measurement of corticosteroids, recent lesion and implant studies indicate that the hippocampus regulates adrenocortical activity at the hypothalamic level, via the expression and secretion of ACTH secretagogues. Such inhibition results largely from the mediation of corticosteroid feedback, although more work is required to determine whether the hippocampus supplies a tonic inhibitory input in the absence of corticosteroids. It must be noted that the hippocampus is not the only feedback site in the adrenocortical system, since removal of its input only reduces, but does not abolish, the efficacy of corticosteroid inhibition, and since other elements of the axis appear eventually to compensate for deficits in feedback regulation. The importance of other feedback sites is further suggested not only by the presence of corticosteroid receptors in other parts of the brain and pituitary, but also by the improved prediction of CRF levels by combined hypothalamic and hippocampal receptor occupancy. The likelihood of feedback mediated by nonhippocampal sites underscores the need for future work to characterize hippocampal influence on HPA activity in the absence of changes in corticosteroid secretion. However, despite the fact that the hippocampus is not the only feedback site, it is distinguished from most potential feedback sites, including the hypothalamus and pituitary, by its high content of both type I and II corticosteroid receptors. The hippocampus is therefore capable of mediating inhibition over a wide range of steroid levels. The low end of this range is represented by corticosteroid inhibition of basal (circadian nadir) HPA activity. The apparent type I receptor specificity of this inhibition and the elevation of trough corticosteroid levels after hippocampal damage support a role for hippocampal type I receptors in regulating basal HPA activity. It is possible that basal activity is controlled in part through hippocampal inhibition of vasopressin, since the inhibition of portal blood vasopressin correlates with lower levels of hippocampal receptor occupancy, and the expression of vasopressin by some CRF neurons is sensitive to very low corticosteroid levels. At the high end of the physiological range, stress-induced or circadian peak corticosteroid secretion correlates strongly with occupancy of the lower affinity hippocampal type II receptors.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "d08773d7b7ec8ca1d1183c6586cfc19a",
"text": "Proprietary cryptography is a term used to describe custom encryption techniques that are kept secret by its designers to add additional security. It is questionable if such an approach increases the cryptographic strength of the underlying mathematical algorithms. The security of proprietary encryption techniques relies entirely on the competence of the semi-conductor companies, which keep the technical description strictly confidential after designing. It is difficult to give a public and independent security assessment of the cryptography, without having access to the detailed information of the design. Proprietary cryptography is currently deployed in many products which are used on a daily basis by the majority of people world-wide. It is embedded in the computational core of many wireless and contactless devices used in access control systems and vehicle immobilizers. Contactless access control cards are used in various security systems. Examples include the use in public transport, payment terminals, office buildings and even in highly secure facilities such as ministries, banks, nuclear power plants and prisons. Many of these access control cards are based on proprietary encryption techniques. Prominent examples are the widely deployed contactless access control systems that use the MIFARE Classic, iClass and Cryptomemory technology. A vehicle immobilizer is an electronic device that prevents the engine of the vehicle from starting when the corresponding transponder is not present. This transponder is a wireless radio frequency chip which is typically embedded in the plastic casing of the car key. When the driver tries to start the vehicle, the car authenticates the transponder before starting the engine, thus preventing hot-wiring. According to European Commission directive (95/56/EC) it is mandatory that all cars, sold in the EU from 1995 onwards, are fitted with an electronic immobilizer. In practice, almost all recently sold cars in Europe are protected by transponders that embed one of the two proprietary encryption techniques Hitag2 or Megamos Crypto. In this doctoral thesis well-known techniques are combined with novel methods",
"title": ""
},
{
"docid": "c3c0de7f448c08ff8316ac2caed78b87",
"text": "Wearable robots, i.e. active orthoses, exoskeletons, and mechatronic prostheses, represent a class of biomechatronic systems posing severe constraints in terms of safety and controllability. Additionally, whenever the worn system is required to establish a well-tuned dynamic interaction with the human body, in order to exploit emerging dynamical behaviours, the possibility of having modular joints, able to produce a controllable viscoelastic behaviour, becomes crucial. Controllability is a central issue in wearable robotics applications, because it impacts robot safety and effectiveness. Under this regard, DC motors offer very good performances, provided that a proper mounting scheme is used in order to mimic the typical viscoelastici behaviour exhibited by biological systems, as required by the selected application. In this paper we report on the design of two compact devices for controlling the active and passive torques applied to the joint of a wearable robot for the lower limbs. The first device consists of a rotary Serial Elastic Actuator (SEA), incorporating a custom made torsion spring. The second device is a purely mechanical passive viscoelastici joint, functionally equivalent to a torsion spring mounted in parallel to a rotary viscous damper. The torsion stiffness and the damping coefficient can be easily tuned by acting on specific elements, thanks to the modular design of the device. The working principles and basic design choices regarding the overall architectures and the single components are presented and discussed.",
"title": ""
},
{
"docid": "03b7c9146ff404d7c5d2404d1f08a88b",
"text": "We propose an approach to detect flying objects such as UAVs and aircrafts when they occupy a small portion of the field of view, possibly moving against complex backgrounds, and are filmed by a camera that itself moves.",
"title": ""
},
{
"docid": "b86ea36ee5a3b6c27713de3f809841b8",
"text": "From a group of 1,189 AA patients seen in our dermatology unit, thirteen (3 males, 10 females) experienced hair shedding that started profusely and diffusely over the entire scalp. They were under observation for about 5 years, histopathology and trichograms being performed in all instances. The mean age of the patients was 26.7 years. It took only 2.3 months on average from the onset of hair shedding to total denudation of the scalp. The trichogram at the time of diffuse shedding showed that about 80% had dystrophic roots and the remaining 20% had telogen roots. Histopathological findings and exclamation mark hairs were compatible with alopecia areata. Regrowth of hair was noted 3.2 month after the onset of hair shedding and recovery observed in 4.8 months. All patients were treated by methylprednisolone pulse therapy. During the follow-up period, 53 months on average after recovery, 8 of the 13 patients (61.5%) showed normal scalp hair without recurrence, in 4 patients the recovery was cosmetically acceptable in spite of focal recurrences and only 1 patient showed a severe relapse after recovery. Considering all of the above findings, this group of the patients should be delineated by the term acute alopecia totalis.",
"title": ""
},
{
"docid": "52c9d8a1bf6fabbe0771eef75a64c1d8",
"text": "This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.",
"title": ""
}
] |
scidocsrr
|
9aec7682c9507086ab1022b9cec8ac9c
|
Pricing Digital Marketing: Information, Risk Sharing and Performance
|
[
{
"docid": "f7562e0540e65fdfdd5738d559b4aad1",
"text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to imple0732-2399/96/1504/0321$01.25 Copyright C 1996, Institute for Operations Research and the Management Sciences ment a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boasts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models) MARKETING SCIENCE/Vol. 15, No. 4, 1996 pp. 321-340 THE VALUE OF PURCHASE HISTORY DATA IN TARGET MARKETING",
"title": ""
}
] |
[
{
"docid": "dc67945b32b2810a474acded3c144f68",
"text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.",
"title": ""
},
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "7def66c81180a73282cd7e463dc4938c",
"text": "Drug abuse in Nigeria has been indicated to be on the rise in recent years. The use of hard drugs and misuse of prescription drugs for nonmedical purposes cuts across all strata, especially the youths. Tramadol (2[(Dimethylamin) methyl]-1-(3-methoxyphenyl)cyclohexanol) is known for its analgesic potentials. This potent opioid pain killer is misused by Nigerian youths, owing to its suspicion as sexual performance drug. This study therefore is aimed at determining the effect of tramadol on hormone levels its improved libido properties and possibly fertility. Twenty seven (27) European rabbits weighing 1.0 to 2.0 kg were used. Animals were divided into four major groups consisting of male and female control, and male and female tramadol treated groups. Treated groups were further divided into oral and intramuscular (IM) administered groups. Oral groups were administered 25 mg/kg b.w. of tramadol per day while the IM groups received 15 mg/kg b.w. per Original Research Article Osadolor and Omo-Erhabor; BJMMR, 14(8): 1-11, 2016; Article no.BJMMR.24620 2 day over a period of thirty days. Blood samples were collected at the end of the experiment for progesterone, testosterone, estrogen (E2), luteinizing hormone, follicle stimulating hormone (FSH), β-human chorionic gonadotropin and prolactin estimation. Tramadol treated groups were compared with control groups at the end of the study, as well as within group comparison was done. From the results, FSH was found to be significantly reduced (p<0.05) while LH increased significantly (p<0.05). A decrease was observed for testosterone (p<0.001), and estrogen, FSH, progesterone also decreased (p<0.05). Significant changes weren’t observed when IM groups were compared with oral groups. This study does not support an improvement of libido by tramadol, though its possible usefulness in the treatment of premature ejaculation may have been established, but its capabilities to induce male and female infertility is still in doubt.",
"title": ""
},
{
"docid": "95cd9d6572700e2b118c7cb0ffba549a",
"text": "Non-volatile main memory (NVRAM) has the potential to fundamentally change the persistency of software. Applications can make their state persistent by directly placing data structures on NVRAM instead of volatile DRAM. However, the persistent nature of NVRAM requires significant changes for memory allocators that are now faced with the additional tasks of data recovery and failure-atomicity. In this paper, we present nvm malloc, a general-purpose memory allocator concept for the NVRAM era as a basic building block for persistent applications. We introduce concepts for managing named allocations for simplified recovery and using volatile and non-volatile memory in combination to provide both high performance and failure-atomic allocations.",
"title": ""
},
{
"docid": "ed2c198cf34fe63d99a53dd5315bde53",
"text": "The article briefly elaborated the ship hull optimization research development of domestic and foreign based on CFD, proposed that realizing the key of ship hull optimization based on CFD is the hull form parametrization geometry modeling technology. On the foundation of the domestic and foreign hull form parametrization, we proposed the ship blending method, and clarified the principle, had developed the hull form parametrization blending module. Finally, we realized the integration of hull form parametrization blending module and CFD using the integrated optimization frame, has realized hull form automatic optimization design based on CFD, build the foundation for the research of ship multi-disciplinary optimization.",
"title": ""
},
{
"docid": "b25cfcd6ceefffe3039bb5a6a53e216c",
"text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.",
"title": ""
},
{
"docid": "31865d8e75ee9ea0c9d8c575bbb3eb90",
"text": "Magicians use misdirection to prevent you from realizing the methods used to create a magical effect, thereby allowing you to experience an apparently impossible event. Magicians have acquired much knowledge about misdirection, and have suggested several taxonomies of misdirection. These describe many of the fundamental principles in misdirection, focusing on how misdirection is achieved by magicians. In this article we review the strengths and weaknesses of past taxonomies, and argue that a more natural way of making sense of misdirection is to focus on the perceptual and cognitive mechanisms involved. Our psychologically-based taxonomy has three basic categories, corresponding to the types of psychological mechanisms affected: perception, memory, and reasoning. Each of these categories is then divided into subcategories based on the mechanisms that control these effects. This new taxonomy can help organize magicians' knowledge of misdirection in a meaningful way, and facilitate the dialog between magicians and scientists.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "b86711e8a418bde07e16bcb9a394d92c",
"text": "This paper reviews and evaluates the evidence for the existence of distinct varieties of developmental dyslexia, analogous to those found in the acquired dyslexic population. Models of the normal adult reading process and of the development of reading in children are used to provide a framework for considering the issues. Data from a large-sample study of the reading patterns of developmental dyslexics are then reported. The lexical and sublexical reading skills of 56 developmental dyslexics were assessed through close comparison with the skills of 56 normally developing readers. The results indicate that there are at least two varieties of developmental dyslexia, the first of which is characterised by a specific difficulty using the lexical procedure, and the second by a difficulty using the sublexical procedure. These subtypes are apparently not rare, but are relatively prevalent in the developmental dyslexic population. The results of a second experiment, which suggest that neither of these reading patterns can be accounted for in terms of a general language disorder, are then reported.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "ccf1f3cb6a9efda6c7d6814ec01d8329",
"text": "Twitter as a micro-blogging platform rose to instant fame mainly due to its minimalist features that allow seamless communication between users. As the conversations grew thick and faster, a placeholder feature called as Hashtags became important as it captured the themes behind the tweets. Prior studies have investigated the conversation dynamics, interplay with other media platforms and communication patterns between users for specific event-based hashtags such as the #Occupy movement. Commonplace hashtags which are used on a daily basis have been largely ignored due to their seemingly innocuous presence in tweets and also due to the lack of connection with real-world events. However, it can be postulated that utility of these hashtags is the main reason behind their continued usage. This study is aimed at understanding the rationale behind the usage of a particular type of commonplace hashtags:-location hashtags such as country and city name hashtags. Tweets with the hashtag #singapore were extracted for a week’s duration. Manual and automatic tweet classification was performed along with social network analysis, to identify the underlying themes. Seven themes were identified. Findings indicate that the hashtag is prominent in tweets about local events, local news, users’ current location and landmark related information sharing. Users who share content from social media sites such as Instagram make use of the hashtag in a more prominent way when compared to users who post textual content. News agencies, commercial bodies and celebrities make use of the hashtag more than common individuals. Overall, the results show the non-conversational nature of the hashtag. The findings are to be validated with other country names and crossvalidated with hashtag data from other social media platforms.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "011332e3d331d461e786fd2827b0434d",
"text": "In this manuscript we present various robust statistical methods popular in the social sciences, and show how to apply them in R using the WRS2 package available on CRAN. We elaborate on robust location measures, and present robust t-test and ANOVA versions for independent and dependent samples, including quantile ANOVA. Furthermore, we present on running interval smoothers as used in robust ANCOVA, strategies for comparing discrete distributions, robust correlation measures and tests, and robust mediator models.",
"title": ""
},
{
"docid": "c5fc804aa7f98a575a0e15b7c28650e8",
"text": "In the past few years, a great attention has been received by web documents as a new source of individual opinions and experience. This situation is producing increasing interest in methods for automatically extracting and analyzing individual opinion from web documents such as customer reviews, weblogs and comments on news. This increase was due to the easy accessibility of documents on the web, as well as the fact that all these were already machine-readable on gaining. At the same time, Machine Learning methods in Natural Language Processing (NLP) and Information Retrieval were considerably increased development of practical methods, making these widely available corpora. Recently, many researchers have focused on this area. They are trying to fetch opinion information and analyze it automatically with computers. This new research domain is usually called Opinion Mining and Sentiment Analysis. . Until now, researchers have developed several techniques to the solution of the problem. This paper try to cover some techniques and approaches that be used in this area.",
"title": ""
},
{
"docid": "789de6123795ad8950c21b0ee8df7315",
"text": "This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird’s advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators. Value-based reinforcement learning is an attractive solution to planning problems in environments with unknown, unstructured dynamics. In its canonical form, value-based reinforcement learning produces successive refinements of an initial value function through repeated application of a convergent operator. In particular, value iteration (Bellman 1957) directly computes the value function through the iterated evaluation of Bellman’s equation, either exactly or from samples (e.g. Q-Learning, Watkins 1989). In its simplest form, value iteration begins with an initial value function V0 and successively computes Vk+1 := T Vk, where T is the Bellman operator. When the environment dynamics are unknown, Vk is typically replaced by Qk, the state-action value function, and T is approximated by an empirical Bellman operator. The fixed point of the Bellman operator, Q∗, is the optimal state-action value function or optimal Q-function, from which an optimal policy π∗ can be recovered. In this paper we argue that the optimal Q-function is inconsistent, in the sense that for any action a which is subop∗Now at Carnegie Mellon University. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. timal in state x, Bellman’s equation for Q∗(x, a) describes the value of a nonstationary policy: upon returning to x, this policy selects π∗(x) rather than a. While preserving global consistency appears impractical, we propose a simple modification to the Bellman operator which provides us a with a first-order solution to the inconsistency problem. Accordingly, we call our new operator the consistent Bellman operator. We show that the consistent Bellman operator generally devalues suboptimal actions but preserves the set of optimal policies. As a result, the action gap – the value difference between optimal and second best actions – increases. This increasing of the action gap is advantageous in the presence of approximation or estimation error, and may be crucial for systems operating at a fine time scale such as video games (Togelius et al. 2009; Bellemare et al. 2013), real-time markets (Jiang and Powell 2015), and robotic platforms (Riedmiller et al. 2009; Hoburg and Tedrake 2009; Deisenroth and Rasmussen 2011; Sutton et al. 2011). 
In fact, the idea of devaluating suboptimal actions underpins Baird’s advantage learning (Baird 1999), designed for continuous time control, and occurs naturally when considering the discretized solution of continuous time and space MDPs (e.g. Munos and Moore 1998; 2002), whose limit is the HamiltonJacobi-Bellman equation (Kushner and Dupuis 2001). Our empirical results on the bicycle domain (Randlov and Alstrom 1998) show a marked increase in performance from using the consistent Bellman operator. In the second half of this paper we derive novel sufficient conditions for an operator to preserve optimality. The relative weakness of these new conditions reveal that it is possible to deviate significantly from the Bellman operator without sacrificing optimality: an optimality-preserving operator needs not be contractive, nor even guarantee convergence of the Q-values for suboptimal actions. While numerous alternatives to the Bellman operator have been put forward (e.g. recently Azar et al. 2011; Bertsekas and Yu 2012), we believe our work to be the first to propose such a major departure from the canonical fixed-point condition required from an optimality-preserving operator. As proof of the richness of this new operator family we describe a few practical instantiations with unique properties. We use our operators to obtain state-of-the-art empirical results on the Arcade Learning Environment (Bellemare et al. 2013). We consider the Deep Q-Network (DQN) architecture of Mnih et al. (2015), replacing only its learning rule with one of our operators. Remarkably, this one-line change produces agents that significantly outperform the original DQN. Our work, we believe, demonstrates the potential impact of rethinking the core components of value-based reinforcement learning.",
"title": ""
}
] |
scidocsrr
|
a5d0a4a7fb20a3ce8298187b5467fe91
|
Faster Algorithms for Max-Product Message-Passing
|
[
{
"docid": "4ac3c3fb712a1121e0990078010fe4b0",
"text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is",
"title": ""
}
] |
[
{
"docid": "47fd07d8f2f540ee064e1c674c550637",
"text": "Virtual reality and 360-degree video streaming are growing rapidly, yet, streaming high-quality 360-degree video is still challenging due to high bandwidth requirements. Existing solutions reduce bandwidth consumption by streaming high-quality video only for the user's viewport. However, adding the spatial domain (viewport) to the video adaptation space prevents the existing solutions from buffering future video chunks for a duration longer than the interval that user's viewport is predictable. This makes playback more prone to video freezes due to rebuffering, which severely degrades the user's Quality of Experience especially under challenging network conditions. We propose a new method that alleviates the restrictions on buffer duration by utilizing scalable video coding. Our method significantly reduces the occurrence of rebuffering on links with varying bandwidth without compromising playback quality or bandwidth efficiency compared to the existing solutions. We demonstrate the efficiency of our proposed method using experimental results with real world cellular network bandwidth traces.",
"title": ""
},
{
"docid": "2b0cc3aa68c671c7c14726b51e1713ca",
"text": "The conflux of two growing areas of technology— collaboration and visualization—into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another—in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.",
"title": ""
},
{
"docid": "518a9ed23b2989c131fa46b740ab26a6",
"text": "The idea is to identify security-critical software bugs so they can be fixed first.",
"title": ""
},
{
"docid": "6f3c44edc2bbfad62f4629b55ade0537",
"text": "Knowledge graph (KG) completion adds new facts to a KG by making inferences from existing facts. Most existing methods ignore the time information and only learn from time-unknown fact triples. In dynamic environments that evolve over time, it is important and challenging for knowledge graph completion models to take into account the temporal aspects of facts. In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts. To incorporate the happening time of facts, we propose a time-aware KG embedding model using temporal order information among facts. To incorporate the valid time of facts, we propose a joint time-aware inference model based on Integer Linear Programming (ILP) using temporal consistency information as constraints. We further integrate two models to make full use of global temporal information. We empirically evaluate our models on time-aware KG completion task. Experimental results show that our time-aware models achieve the state-of-the-art on temporal facts consistently.",
"title": ""
},
{
"docid": "aba6fb0d9e56a45801c782df37ed1616",
"text": "To improve muscular strength and hypertrophy the American College of Sports Medicine recommends moderate to high load resistance training. However, use of moderate to high loads are often not feasible in clinical populations. Therefore, the emergence of low load (LL) blood flow restriction (BFR) training as a rehabilitation tool for clinical populations is becoming popular. Although the majority of research on LL-BFR training has examined healthy populations, clinical applications are emerging. Overall, it appears BFR training is a safe and effective tool for rehabilitation. However, additional research is needed prior to widespread application.",
"title": ""
},
{
"docid": "b318cfcbe82314cc7fa898f0816dbab8",
"text": "Flow experience is often considered as an important standard of ideal user experience (UX). Till now, flow is mainly measured via self-report questionnaires, which cannot evaluate flow immediately and objectively. In this paper, we constructed a physiological evaluation model to evaluate flow in virtual reality (VR) game. The evaluation model consists of five first-level indicators and their respective second-level indicators. Then, we conducted an empirical experiment to test the effectiveness of partial indicators to predict flow experience. Most results supported the model and revealed that heart rate, interbeat interval, heart rate variability (HRV), low-frequency HRV (LF-HRV), high-frequency HRV (HF-HRV), and respiratory rate are all effective indicators in predicting flow experience. Further research should be conducted to improve the evaluation model and conclude practical implications in UX and VR game design.",
"title": ""
},
{
"docid": "cf51f466c72108d5933d070b307e5d6d",
"text": "The study reported here follows the suggestion by Caplan et al. (Justice Q, 2010) that risk terrain modeling (RTM) be developed by doing more work to elaborate, operationalize, and test variables that would provide added value to its application in police operations. Building on the ideas presented by Caplan et al., we address three important issues related to RTM that sets it apart from current approaches to spatial crime analysis. First, we address the selection criteria used in determining which risk layers to include in risk terrain models. Second, we compare the ‘‘best model’’ risk terrain derived from our analysis to the traditional hotspot density mapping technique by considering both the statistical power and overall usefulness of each approach. Third, we test for ‘‘risk clusters’’ in risk terrain maps to determine how they can be used to target police resources in a way that improves upon the current practice of using density maps of past crime in determining future locations of crime occurrence. This paper concludes with an in depth exploration of how one might develop strategies for incorporating risk terrains into police decisionmaking. RTM can be developed to the point where it may be more readily adopted by police crime analysts and enable police to be more effectively proactive and identify areas with the greatest probability of becoming locations for crime in the future. The targeting of police interventions that emerges would be based on a sound understanding of geographic attributes and qualities of space that connect to crime outcomes and would not be the result of identifying individuals from specific groups or characteristics of people as likely candidates for crime, a tactic that has led police agencies to be accused of profiling. In addition, place-based interventions may offer a more efficient method of impacting crime than efforts focused on individuals.",
"title": ""
},
{
"docid": "bf04d5a87fbac1157261fac7652b9177",
"text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payo is completely determined by the identity of other members of her coalition. We rst discuss how hedonic and non-hedonic settings di er and some su cient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can bene t from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.",
"title": ""
},
{
"docid": "43d77f09655e0e34274f8a90075f8949",
"text": "Within this paper, CoBoLD — short for Cone Bolt Locking Device — a bonding mechanism for modular self-reconfigurable autonomous mobile robots is presented. The docking unit is used to physically connect modular robots and/or toolboxes in order to form complex structures and robotic organisms. CoBoLD is specially designed to combine essential features like genderlessness, symmetry, high stiffness, integrated force sensors and electrical contacts, while still using a simple and low cost, easy to manufacture setup. Next to a detailed description of CoBoLDs assembly, first trails and their results are shown in this paper. CoBoLD is used at several different robotic platforms and toolboxes within the EU founded projects SYMBRION and REPLICATOR.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "229605eada4ca390d17c5ff168c6199a",
"text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.",
"title": ""
},
{
"docid": "07fc203735e9da22e0dc49c4a1153db0",
"text": "The implementation, diffusion and adoption of e-government in the public sector has been a topic that has been debated by the research community for some time. In particular, the limited adoption of e-government services is attributed to factors such as the heterogeneity of users, lack of user-orientation, the limited transformation of public sector and the mismatch between expectations and supply. In this editorial, we review theories and factors impacting implementation, diffusion and adoption of e-government. Most theories used in prior research follow mainstream information systems concepts, which can be criticized for not taking into account e-government specific characteristics. The authors argue that there is a need for e-government specific theories and methodologies that address the idiosyncratic nature of e-government as the well-known information systems concepts that are primarily developed for business contexts are not equipped to encapsulate the complexities surrounding e-government. Aspects like accountability, digital divide, legislation, public governance, institutional complexity and citizens' needs are challenging issues that have to be taken into account in e-government theory and practices. As such, in this editorial we argue that e-government should develop as an own strand of research, while information systems theories and concepts should not be neglected.",
"title": ""
},
{
"docid": "1e1cad07832b4f37ce5573592e3a8074",
"text": "The current BSC guidance issued by the FDA allows for biowaivers based on conservative criteria. Possible new criteria and class boundaries are proposed for additional biowaivers based on the underlying physiology of the gastrointestinal tract. The proposed changes in new class boundaries for solubility and permeability are as follows: 1. Narrow the required solubility pH range from 1.0-7.5 to 1.0-6.8. 2. Reduce the high permeability requirement from 90% to 85%. The following new criterion and potential biowaiver extension require more research: 1. Define a new intermediate permeability class boundary. 2. Allow biowaivers for highly soluble and intermediately permeable drugs in IR solid oral dosage forms with no less than 85% dissolved in 15 min in all physiologically relevant dissolution media, provided these IR products contain only known excipients that do not affect the oral drug absorption. The following areas require more extensive research: 1. Increase the dose volume for solubility classification to 500 mL. 2. Include bile salt in the solubility measurement. 3. Use the intrinsic dissolution method for solubility classification. 4. Define an intermediate solubility class for BCS Class II drugs. 5. Include surfactants in in vitro dissolution testing.",
"title": ""
},
{
"docid": "9e804b49534bedcde2611d70c40b255d",
"text": "PURPOSE\nScreening tool of older people's prescriptions (STOPP) and screening tool to alert to right treatment (START) criteria were first published in 2008. Due to an expanding therapeutics evidence base, updating of the criteria was required.\n\n\nMETHODS\nWe reviewed the 2008 STOPP/START criteria to add new evidence-based criteria and remove any obsolete criteria. A thorough literature review was performed to reassess the evidence base of the 2008 criteria and the proposed new criteria. Nineteen experts from 13 European countries reviewed a new draft of STOPP & START criteria including proposed new criteria. These experts were also asked to propose additional criteria they considered important to include in the revised STOPP & START criteria and to highlight any criteria from the 2008 list they considered less important or lacking an evidence base. The revised list of criteria was then validated using the Delphi consensus methodology.\n\n\nRESULTS\nThe expert panel agreed a final list of 114 criteria after two Delphi validation rounds, i.e. 80 STOPP criteria and 34 START criteria. This represents an overall 31% increase in STOPP/START criteria compared with version 1. Several new STOPP categories were created in version 2, namely antiplatelet/anticoagulant drugs, drugs affecting, or affected by, renal function and drugs that increase anticholinergic burden; new START categories include urogenital system drugs, analgesics and vaccines.\n\n\nCONCLUSION\nSTOPP/START version 2 criteria have been expanded and updated for the purpose of minimizing inappropriate prescribing in older people. These criteria are based on an up-to-date literature review and consensus validation among a European panel of experts.",
"title": ""
},
{
"docid": "d79c4cb4da1e84f0ddc6318b1d03cbcd",
"text": "Weproposeadatamodelandqueaylanguagethat integrates an explicit modeling and querying of graphssmoothlyintoastandsrddambaseenvimnment For standard applications, some hey featuresofobjectuientedmodeling~offesedsuchas object classes organized into a hierarchy, object identity, and attributes referencing objects. Queryingcanbedoneinafamiliarstylewitha &rive statement that can be used like a select . ..from . . . wkre.Gntheotherhand,themodel allows for an explicit mpresentation of graphs by partitioning object classes into simple classes, linkclasses,andpathclasseswhoseobjectscan be viewed as nodes, edges, and explicitly stored paths of a graph (which is the whole dambase ins-). For querying graphs, the derive statement has an extended meaning in that it allows onetoiefertosubgraphsofthedambasegraph.A powerful rewrite operation is offered for the manipulation of heterogeneous sequences of objects which often occur as a result of accessing thedambasegraph.Additionallytherearespecial graphoperationslikedekmnn@a&ortestpath or a subgmph and the model is extensible by such operations. Besides being attractive for standard applications, the model permits a natural representation and sophisticated querying of nelworks, in parhk of spatially embedded networks like highways, public transpart, etc. This work was suppoxted by the ESPRIT Basic Resepch Project 6881 AMUSING Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 20th VLDB Conference Santiago, Chile, 1994",
"title": ""
},
{
"docid": "b95fbff16afd8cc23b184a1a429501ad",
"text": "Phase shifting structured light illumination for range sensing involves projecting a set of grating patterns where accuracy is determined, in part, by the number of stripes. However, high pattern frequencies introduce ambiguities during phase unwrapping. This paper proposes a process for embedding a period cue into the projected pattern set without reducing the signal-to-noise ratio. As a result, each period of the high frequency signal can be identified. The proposed method can unwrap high frequency phase and achieve high measurement precision without increasing the pattern number. Therefore, the proposed method can significantly benefit real-time applications. The method is verified by theoretical and experimental analysis using prototype system built to achieve 120 fps at 640 × 480 resolution.",
"title": ""
},
{
"docid": "a9f9f918d0163e18cf6df748647ffb05",
"text": "In previous work, we have shown that using terms from around citations in citing papers to index the cited paper, in addition to the cited paper's own terms, can improve retrieval effectiveness. Now, we investigate how to select text from around the citations in order to extract good index terms. We compare the retrieval effectiveness that results from a range of contexts around the citations, including no context, the entire citing paper, some fixed windows and several variations with linguistic motivations. We conclude with an analysis of the benefits of more complex, linguistically motivated methods for extracting citation index terms, over using a fixed window of terms. We speculate that there might be some advantage to using computational linguistic techniques for this task.",
"title": ""
},
{
"docid": "6f5afc38b09fa4fd1e47d323cfe850c9",
"text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advance insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information. This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.",
"title": ""
},
{
"docid": "a00f39476d72dfd7e244c3588ced3ca5",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract This paper holds a survey on leaf disease detection using various image processing technique. Digital image processing is fast, reliable and accurate technique for detection of diseases also various algorithms can be used for identification and classification of leaf diseases in plant. This paper presents techniques used by different author to identify disease such as clustering method, color base image analysis method, classifier and artificial neural network for classification of diseases. The main focus of our work is on the analysis of different leaf disease detection techniques and also provides an overview of different image processing techniques.",
"title": ""
},
{
"docid": "2cd327bd5a7814776825e090b12664ec",
"text": "is an open access repository that collects the work of Arts et Métiers ParisTech researchers and makes it freely available over the web where possible. This article proposes a method based on wavelet transform and neural networks for relating pupillary behavior to psychological stress. The proposed method was tested by recording pupil diameter and electrodermal activity during a simulated driving task. Self-report measures were also collected. Participants performed a baseline run with the driving task only, followed by three stress runs where they were required to perform the driving task along with sound alerts, the presence of two human evaluators, and both. Self-reports and pupil diameter successfully indexed stress manipulation, and significant correlations were found between these measures. However, electrodermal activity did not vary accordingly. After training, the four-way parallel neu-ral network classifier could guess whether a given unknown pupil diameter signal came from one of the four experimental trials with 79.2% precision. The present study shows that pupil diameter signal has good discriminating power for stress detection. 1. INTRODUCTION Stress detection and measurement are important issues in several human–computer interaction domains such as Affective Computing, Adaptive Automation, and Ambient Intelligence. In general, researchers and system designers seek to estimate the psychological state of operators in order to adapt or redesign the working environment accordingly (Sauter, 1991). The primary goal of such adaptation is to enhance overall system performance, trying to reduce workers' psychophysi-cal detriment (e. One key aspect of stress measurement concerns the recording of physiological parameters, which are known to be modulated by the autonomic nervous system (ANS). However, despite",
"title": ""
}
] |
scidocsrr
|
1ac5f95bda7ad949e010daf7a5c53987
|
Counting People With Low-Level Features and Bayesian Regression
|
[
{
"docid": "b28c0dd4c271dc8d9e15f5b4fdec72e0",
"text": "In its full generality, motion analysis of crowded objects necessitates recognition and segmentation of each moving entity. The difficulty of these tasks increases considerably with occlusions and therefore with crowding. When the objects are constrained to be of the same kind, however, partitioning of densely crowded semi-rigid objects can be accomplished by means of clustering tracked feature points. We base our approach on a highly parallelized version of the KLT tracker in order to process the video into a set of feature trajectories. While such a set of trajectories provides a substrate for motion analysis, their unequal lengths and fragmented nature present difficulties for subsequent processing. To address this, we propose a simple means of spatially and temporally conditioning the trajectories. Given this representation, we integrate it with a learned object descriptor to achieve a segmentation of the constituent motions. We present experimental results for the problem of estimating the number of moving objects in a dense crowd as a function of time.",
"title": ""
}
] |
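The passage above estimates the number of moving objects in a dense crowd by tracking KLT feature points and clustering the conditioned trajectories. A minimal sketch of that idea, not the authors' parallelized pipeline, is given below; the video file name, the tracking window length, the trajectory-length threshold and the DBSCAN parameters are all illustrative assumptions.

```python
# Sketch: track corner features with OpenCV's pyramidal Lucas-Kanade tracker,
# summarize each trajectory, then cluster trajectories to count moving objects.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

cap = cv2.VideoCapture("crowd.mp4")            # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                              qualityLevel=0.01, minDistance=5)
tracks = [[p.ravel()] for p in pts]            # one trajectory per feature

for _ in range(60):                            # track over a short window
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                  np.float32(pts), None)
    for tr, p, st in zip(tracks, new_pts, status):
        if st:                                 # point still tracked
            tr.append(p.ravel())
    pts, prev_gray = new_pts, gray

# Condition trajectories: keep reasonably long ones, describe each by its mean
# position and mean per-frame displacement, and cluster in that feature space.
feats = np.array([np.hstack([np.mean(tr, axis=0),
                             (np.array(tr[-1]) - np.array(tr[0])) / len(tr)])
                  for tr in tracks if len(tr) > 30])
labels = DBSCAN(eps=20.0, min_samples=5).fit_predict(feats) if len(feats) else []
n_objects = len(set(labels)) - (1 if -1 in labels else 0)
print("estimated moving objects:", n_objects)
```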
[
{
"docid": "8bc095fca33d850db89ffd15a84335dc",
"text": "There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.",
"title": ""
},
{
"docid": "5fc02317117c3068d1409a42b025b018",
"text": "Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. It is also able to solve far more instances within a given runtime limit than either of the other approaches.",
"title": ""
},
{
"docid": "d7c0d9e43f8f894fbe21154c2a26c3fd",
"text": "Decision tree classification (DTC) is a widely used technique in data mining algorithms known for its high accuracy in forecasting. As technology has progressed and available storage capacity in modern computers increased, the amount of data available to be processed has also increased substantially, resulting in much slower induction and classification times. Many parallel implementations of DTC algorithms have already addressed the issues of reliability and accuracy in the induction process. In the classification process, larger amounts of data require proportionately more execution time, thus hindering the performance of legacy systems. We have devised a pipelined architecture for the implementation of axis parallel binary DTC that dramatically improves the execution time of the algorithm while consuming minimal resources in terms of area. Scalability is achieved when connected to a high-speed communication unit capable of performing data transfers at a rate similar to that of the DTC engine. We propose a hardware accelerated solution composed of parallel processing nodes capable of independently processing data from a streaming source. Each engine processes the data in a pipelined fashion to use resources more efficiently and increase the achievable throughput. The results show that this system is 3.5 times faster than the existing hardware implementation of classification.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "f7ff2a89ed5aed67bbb2dc41defa30a8",
"text": "People with color-grapheme synesthesia experience color when viewing written letters or numerals, usually with a particular color evoked by each grapheme. Here, we report on data from 11 color-grapheme synesthetes who had startlingly similar color-grapheme pairings traceable to childhood toys containing colored letters. These are the first and only data to show learned synesthesia of this kind in more than a single individual. Whereas some researchers have focused on genetic and perceptual aspects of synesthesia, our results indicate that a complete explanation of synesthesia must also incorporate a central role for learning and memory. We argue that these two positions can be reconciled by thinking of synesthesia as the automatic retrieval of highly specific mnemonic associations, in which perceptual contents are brought to mind in a manner akin to mental imagery or the perceptual-reinstatement effects found in memory studies.",
"title": ""
},
{
"docid": "86aca69fa9d46e27a26c586962d9309f",
"text": "FX&MM MAY ISSUE 2010 To subscribe online visit: www.fx-mm.com REVERSE FACTORING – BENEFITS FOR ALL A growing number of transaction banks are implementing supplier finance programmes for their large credit-worthy customers who wish to support their supply chain partners. Reverse factoring is the most popular model, enabling banks to provide suppliers with finance at a lower cost than they would normally achieve through direct credit facilities. The credit arbitrage is achieved by the bank securing an undertaking from the buyer (who has a higher credit rating than the suppliers) to settle all invoices at maturity. By financing the buyer’s approved payables, the bank mitigates transaction and fraud risk. In addition to the lower borrowing costs and the off balance sheet treatment of these receivables purchase programmes, a further attraction for suppliers invoicing in foreign currencies is that by taking early payment they protect themselves against foreign exchange fluctuations. In return, the buyer ensures a more stable and robust supply chain, can choose to negotiate lower costs of goods and extend Days Payable Outstanding, improving working capital. Given the compelling benefits of reverse factoring, the market challenge is to drive these new programmes into mainstream acceptance.",
"title": ""
},
{
"docid": "5a6fc8dd2b73f5481cbba649e5e76c1b",
"text": "Mobile phones are becoming the latest target of electronic junk mail. Recent reports clearly indicate that the volume of SMS spam messages are dramatically increasing year by year. Probably, one of the major concerns in academic settings was the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. To address this issue, we have recently proposed a new SMS Spam Collection that, to the best of our knowledge, is the largest, public and real SMS dataset available for academic studies. However, as it has been created by augmenting a previously existing database built using roughly the same sources, it is sensible to certify that there are no duplicates coming from them. So, in this paper we offer a comprehensive analysis of the new SMS Spam Collection in order to ensure that this does not happen, since it may ease the task of learning SMS spam classifiers and, hence, it could compromise the evaluation of methods. The analysis of results indicate that the procedure followed does not lead to near-duplicates and, consequently, the proposed dataset is reliable to use for evaluating and comparing the performance achieved by different classifiers.",
"title": ""
},
{
"docid": "4d882c081dab44b941e3006b274fc91c",
"text": "A novel, highly efficient and broadband RF power amplifier (PA) operating in “continuous class-F” mode has been realized for first time. The introduction and experimental verification of this new PA mode demonstrates that it is possible to maintain expected output performance, both in terms of efficiency and power, over a very wide bandwidth. Using recently established continuous class-F theory, an output matching network was designed to terminate the first three harmonic impedances. This resulted in a PA delivering an average drain efficiency of 74% and average output power of 10.5W for an octave bandwidth between 0.55GHz and 1.1GHz. A commercially available 10W GaN HEMT transistor has been used for the PA design and realization.",
"title": ""
},
{
"docid": "dfefef6d2bd15cb4bc859338c76cbbff",
"text": "The determination of the most central agents in complex networks is important because they are responsible for a faster propagation of information, epidemics, failures and congestion, among others. A challenging problem is to identify them in networked systems characterized by different types of interactions, forming interconnected multilayer networks. Here we describe a mathematical framework that allows us to calculate centrality in such networks and rank nodes accordingly, finding the ones that play the most central roles in the cohesion of the whole structure, bridging together different types of relations. These nodes are the most versatile in the multilayer network. We investigate empirical interconnected multilayer networks and show that the approaches based on aggregating--or neglecting--the multilayer structure lead to a wrong identification of the most versatile nodes, overestimating the importance of more marginal agents and demonstrating the power of versatility in predicting their role in diffusive and congestion processes.",
"title": ""
},
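The abstract above ranks nodes in interconnected multilayer networks by a versatility measure. A minimal sketch under simplifying assumptions (a two-layer multiplex, uniform interlayer coupling, and an eigenvector-style score on the supra-adjacency matrix) is given below; the toy layers and the coupling weight are illustrative, and the paper's tensorial formulation is more general.

```python
# Sketch: eigenvector-style versatility on a two-layer multiplex network.
import numpy as np

n = 5
rng = np.random.default_rng(1)
A1 = (rng.random((n, n)) < 0.5).astype(float)
A1 = np.triu(A1, 1); A1 = A1 + A1.T            # symmetric toy layer 1
A2 = (rng.random((n, n)) < 0.5).astype(float)
A2 = np.triu(A2, 1); A2 = A2 + A2.T            # symmetric toy layer 2

omega = 1.0                                    # interlayer coupling strength (assumed)
supra = np.block([[A1, omega * np.eye(n)],
                  [omega * np.eye(n), A2]])

# Leading eigenvector of the symmetric supra-adjacency matrix; each node has
# one copy per layer, so its versatility aggregates both copies.
w, V = np.linalg.eigh(supra)
v = np.abs(V[:, -1])
versatility = v[:n] + v[n:]
print("versatility per node:", np.round(versatility, 3))
print("most versatile node:", int(np.argmax(versatility)))
```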
{
"docid": "9e5ea2211fda032877c68de406b6cf44",
"text": "Two-dimensional crystals are emerging materials for nanoelectronics. Development of the field requires candidate systems with both a high carrier mobility and, in contrast to graphene, a sufficiently large electronic bandgap. Here we present a detailed theoretical investigation of the atomic and electronic structure of few-layer black phosphorus (BP) to predict its electrical and optical properties. This system has a direct bandgap, tunable from 1.51 eV for a monolayer to 0.59 eV for a five-layer sample. We predict that the mobilities are hole-dominated, rather high and highly anisotropic. The monolayer is exceptional in having an extremely high hole mobility (of order 10,000 cm(2) V(-1) s(-1)) and anomalous elastic properties which reverse the anisotropy. Light absorption spectra indicate linear dichroism between perpendicular in-plane directions, which allows optical determination of the crystalline orientation and optical activation of the anisotropic transport properties. These results make few-layer BP a promising candidate for future electronics.",
"title": ""
},
{
"docid": "36828667ce43ab5d489f74e112045639",
"text": "Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.",
"title": ""
},
{
"docid": "fed4de5870b41715d7f9abc0714db99d",
"text": "This paper presents an approach to stereovision applied to small water vehicles. By using a small low-cost computer and inexpensive off-the-shelf components, we were able to develop an autonomous driving system capable of following other vehicle and moving along paths delimited by coloured buoys. A pair of webcams was used and, with an ultrasound sensor, we were also able to implement a basic frontal obstacle avoidance system. With the help of the stereoscopic system, we inferred the position of specific objects that serve as references to the ASV guidance. The final system is capable of identifying and following targets in a distance of over 5 meters. This system was integrated with the framework already existent and shared by all the vehicles used in the OceanSys research group at INESC - DEEC/FEUP.",
"title": ""
},
{
"docid": "5e9f408e6b44afd868fb39bbfc4d7170",
"text": "With the advent of commodity autonomous mobiles, it is becoming increasingly prevalent to recognize under extreme conditions such as night, erratic illumination conditions. This need has caused the approaches using multi-modal sensors, which could be complementary to each other. The choice for the thermal camera provides a rich source of temperature information, less affected by changing illumination or background clutters. However, existing thermal cameras have a relatively smaller resolution than RGB cameras that has trouble for fully utilizing the information in recognition tasks. To mitigate this, we aim to enhance the low-resolution thermal image according to the extensive analysis of existing approaches. To this end, we introduce Thermal Image Enhancement using Convolutional Neural Network (CNN), called in TEN, which directly learns an end-to-end mapping a single low resolution image to the desired high resolution image. In addition, we examine various image domains to find the best representative of the thermal enhancement. Overall, we propose the first thermal image enhancement method based on CNN guided on RGB data. We provide extensive experiments designed to evaluate the quality of image and the performance of several object recognition tasks such as pedestrian detection, visual odometry, and image registration.",
"title": ""
},
{
"docid": "ae392fb6971ffb0e40366b1cf55fe715",
"text": "Objective:To assess possibility of polyphenol-enriched oolong tea to reduce dietary lipid absorption in humans.Design:Twelve healthy adult subjects, three males and nine females, aged (mean±s.d.) 22.0±1.8 years, respectively, were randomly divided into two groups. The participants were followed a double-blind placebo-controlled crossover design, including 7-day washout periods and 10-day treatment periods. During the treatment periods, subjects were given about 38 g of lipids from potato chips (19 g each within 30 min after lunch and dinner) and total 750 ml beverages (placebo- or polyphenol-enriched oolong tea) at three meals. Blood samples were collected for biochemical examination at days 8, 18, 25 and 35 of the study period. On the last 3 days of each treatment period, feces were collected to measure the excretion of lipids.Results:Lipid excretion into feces was significantly higher in the polyphenol-enriched oolong tea period (19.3±12.9 g/3day) than in the placebo period (9.4±7.3 g/3day) (P<0.01). Cholesterol excretion tended to increase in polyphenol-enriched oolong tea period (1.8±1.2 g/3day) compared with that of placebo (1.2±0.6 g/3day) (P=0.056).Conclusions:The results of this study indicated that polyphenol-enriched oolong tea could increase lipid excretion into feces when subjects took high-lipid diet.",
"title": ""
},
{
"docid": "cc976719dfc3e81c9a6b84905d7ed729",
"text": "ERP systems acceptance usually involves radical organizational change because it is often associated with fundamental organizational improvements that cut across functional and organizational boundaries. Recognizing that ERP systems involve organizational change and their implementation is overshadowed by a high failure rate, this study focuses attention on employees’ perceptions of such organizational change. For this purpose, the research incorporates a conceptual construct of attitude toward change that captures views about the need for organizational change. Structural equation analysis using LISREL provides significant support for the proposed relationships. Theoretical and practical implications are discussed along with limitations.",
"title": ""
},
{
"docid": "8c3ec9f28a21a5b1fb7b5b64bed2c49f",
"text": "While struggling to succeed in today’s complex market environment and provide better customer experience and services, enterprises encompass digital transformation as a means for reaching competitiveness and foster value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive it – is it only about digital technologies adoption or a true organizational shift is needed? In order to address this issue and as the first step in our research project, a literature review is conducted. The analysis included case study papers from Scopus and Web of Science databases. The following attributes are considered for classification and analysis of papers: time component; country of case origin; case industry and; digital transformation concept comprehension, i.e. focus. Research showed that organizations – public, as well as private ones, are aware of change necessity and employ digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that besides technologies implementation, organizational changes must also be adopted. However, with only 29 relevant papers identified, research positioned digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice. Keywords—Digital strategy, digital technologies, digital transformation, literature review.",
"title": ""
},
{
"docid": "a8c2c646b1b85e98098a15dab0f64a46",
"text": "One of the fundamental discoveries in the field of biology is the ability to modulate the genome and to monitor the functional outputs derived from genomic alterations. In order to unravel new therapeutic options, scientists had initially focused on inducing genetic alterations in primary cells, in established cancer cell lines and mouse models using either RNA interference or cDNA overexpression or various programmable nucleases [zinc finger nucleases (ZNF), transcription activator-like effector nucleases (TALEN)]. Even though a huge volume of data was produced, its use was neither cheap nor accurate. Therefore, the clustered regularly interspaced short palindromic repeats (CRISPR) system was evidenced to be the next step in genome engineering tools. CRISPR-associated protein 9 (Cas9)-mediated genetic perturbation is simple, precise and highly efficient, empowering researchers to apply this method to immortalized cancerous cell lines, primary cells derived from mouse and human origins, xenografts, induced pluripotent stem cells, organoid cultures, as well as the generation of genetically engineered animal models. In this review, we assess the development of the CRISPR system and its therapeutic applications to a wide range of complex diseases (particularly distinct tumors), aiming at personalized therapy. Special emphasis is given to organoids and CRISPR screens in the design of innovative therapeutic approaches. Overall, the CRISPR system is regarded as an eminent genome engineering tool in therapeutics. We envision a new era in cancer biology during which the CRISPR-based genome engineering toolbox will serve as the fundamental conduit between the bench and the bedside; nonetheless, certain obstacles need to be addressed, such as the eradication of side-effects, maximization of efficiency, the assurance of delivery and the elimination of immunogenicity.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "c419ec59ce6b9c2570afa881010566bd",
"text": "Nonlinear response of structures is usually evaluated considering two accelerograms acting simultaneously along orthogonal directions. In this paper the influence of the earthquake direction on the seismic response of buildings structures is examined. Three multi-story RC buildings, representing a very common structural typology in Italy, are assumed in the paper as case-studies for the evaluation. They are respectively a rectangular plan shape, a L plan shape and a rectangular plan shape with courtyard buildings. Nonlinear static and dynamic analyses are performed considering different seismic levels, characterized by a peak ground acceleration on stiff soil equal to 0.35 g, 0.25 g and 0.15 g. Nonlinear dynamic analyses are carried out considering twelve different earthquake directions, rotating the direction of both the orthogonal components by 30° for each analysis (from 0° to 330°). The survey is carried out on the L plan shape structure. The results show that the angle of the seismic input motion significantly influences the response of RC structures: the critical seismic angle, i.e. the incidence angle that produces the maximum demand, provides an increase up to 37% both in roof displacements and in terms of plastic hinge rotations.",
"title": ""
},
{
"docid": "08faae46f98a8eab45049c9d3d7aa48e",
"text": "One of the assumptions of attachment theory is that individual differences in adult attachment styles emerge from individuals' developmental histories. To examine this assumption empirically, the authors report data from an age 18 follow-up (Booth-LaForce & Roisman, 2012) of the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, a longitudinal investigation that tracked a cohort of children and their parents from birth to age 15. Analyses indicate that individual differences in adult attachment can be traced to variations in the quality of individuals' caregiving environments, their emerging social competence, and the quality of their best friendship. Analyses also indicate that assessments of temperament and most of the specific genetic polymorphisms thus far examined in the literature on genetic correlates of attachment styles are essentially uncorrelated with adult attachment, with the exception of a polymorphism in the serotonin receptor gene (HTR2A rs6313), which modestly predicted higher attachment anxiety and which revealed a Gene × Environment interaction such that changes in maternal sensitivity across time predicted attachment-related avoidance. The implications of these data for contemporary perspectives and debates concerning adult attachment theory are discussed.",
"title": ""
}
] |
scidocsrr
|
bc9d20257bc2d7a54550b8907db9ca9e
|
Table-processing paradigms: a research survey
|
[
{
"docid": "93cec060a420f2ffc3e67eb532186f8e",
"text": "This paper presents an efficient approach to identify tabular structures within either electronic or paper documents. The resulting T—Recs system takes word bounding box information as input, and outputs the corresponding logical text block units (e.g. the cells within a table environment). Starting with an arbitrary word as block seed the algorithm recursively expands this block to all words that interleave with their vertical (north and south) neighbors. Since even smallest gaps of table columns prevent their words from mutual interleaving, this initial segmentation is able to identify and isolate such columns. In order to deal with some inherent segmentation errors caused by isolated lines (e.g. headers), overhanging words, or cells spawning more than one column, a series of postprocessing steps is added. These steps benefit from a very simple distinction between type 1 and type 2 blocks: type 1 blocks are those of at most one word per line, all others are of type 2. This distinction allows the selective application of heuristics to each group of blocks. The conjoint decomposition of column blocks into subsets of table cells leads to the final block segmentation of a homogeneous abstraction level. These segments serve the final layout analysis which identifies table environments and cells that are stretching over several rows and/or columns.",
"title": ""
}
] |
[
{
"docid": "b1d8f5309972b5fe116e491cc738a2a5",
"text": "An important approach for describing a region is to quantify its structure content. In this paper the use of functions for computing texture based on statistical measures is prescribed. MPM (Maximizer of the posterior margins) algorithm is employed. The segmentation based on texture feature would classify the breast tissue under various categories. The algorithm evaluates the region properties of the mammogram image and thereby would classify the image into important segments. Images from mini-MIAS data base (Mammogram Image Analysis Society database (UK)) have been considered to conduct our experiments. The segmentation thus obtained is comparatively better than the other normal methods. The validation of the work has been done by visual inspection of the segmented image by an expert radiologist. This is our basic step for developing a computer aided detection (CAD) system for early detection of breast cancer.",
"title": ""
},
{
"docid": "dd278c37683fb482cc5c3f82d2a7fe10",
"text": "Data mining responsibilities show the broad features of the data in the database and also examine the current data in order to determine some arrangements. While, clustering establishes an important part of professed data mining, a procedure of exploring and analyzing large volumes of data in order to determine valuable information. Outliers are points that do not conform with the common performance of the data. Therefore, we applied the K-Means and K-Medians clustering algorithms to calculate the run time complexity analysis in identification of outliers in clustering analysis. The result shows that the K-Medians clustering algorithm with the use of medians served as robust for its faster and effective run time facilities better performance compared to the K-Means clustering algorithm in detecting outliers. AMS subject classification:",
"title": ""
},
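The abstract above compares K-Means and K-Medians run times for identifying outliers. A minimal sketch of such a comparison is given below, assuming synthetic data, k = 2 clusters and a top-5% distance threshold; none of these choices come from the paper.

```python
# Sketch: time K-Means vs a simple Lloyd-style K-Medians and flag the points
# farthest from their assigned center as outliers.
import time
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)),
               rng.normal(8, 1, (500, 2)),
               rng.uniform(-10, 20, (20, 2))])   # a few scattered outliers
k = 2

def k_medians(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # L1 distances
        labels = d.argmin(axis=1)
        centers = np.array([np.median(X[labels == j], axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels, d[np.arange(len(X)), labels]

t0 = time.perf_counter()
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
d_means = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
t_means = time.perf_counter() - t0

t0 = time.perf_counter()
_, _, d_medians = k_medians(X, k)
t_medians = time.perf_counter() - t0

cut = int(0.05 * len(X))                          # flag the top 5% as outliers
print("K-Means   time %.4fs, outliers:" % t_means, np.argsort(d_means)[-cut:])
print("K-Medians time %.4fs, outliers:" % t_medians, np.argsort(d_medians)[-cut:])
```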
{
"docid": "ca70ba5ad592708e1681f823d09bcd52",
"text": "The causal discovery of Bayesian networks is an active and important research area, and it is based upon searching the space of causal models for those which can best explain a pattern of probabilistic dependencies shown in the data. However, some of those dependencies are generated by causal structures involving variables which have not been measured, i.e., latent variables. Some such patterns of dependency “reveal” themselves, in that no model based solely upon the observed variables can explain them as well as a model using a latent variable. That is what latent variable discovery is based upon. Here we did a search for finding them systematically, so that they may be applied in latent variable discovery in a more rigorous fashion.",
"title": ""
},
{
"docid": "1102e06f7dfcb6749e3e01a671501c52",
"text": "Past behavior guides future responses through 2 processes. Well-practiced behaviors in constant contexts recur because the processing that initiates and controls their performance becomes automatic. Frequency of past behavior then reflects habit strength and has a direct effect on future performance. Alternately, when behaviors are not well learned or when they are performed in unstable or difficult contexts, conscious decision making is likely to be necessary to initiate and carry out the behavior. Under these conditions, past behavior (along with attitudes and subjective norms) may contribute to intentions, and behavior is guided by intentions. These relations between past behavior and future behavior are substantiated in a meta-analytic synthesis of prior research on behavior prediction and in a primary research investigation.",
"title": ""
},
{
"docid": "8e0d5c838647f3999c5bf6d351413dd1",
"text": "We present the results of the first large-scale study of the uniqueness of Web browsing histories, gathered from a total of 368, 284 Internet users who visited a history detection demonstration website. Our results show that for a majority of users (69%), the browsing history is unique and that users for whom we could detect at least 4 visited websites were uniquely identified by their histories in 97% of cases. We observe a significant rate of stability in browser history fingerprints: for repeat visitors, 38% of fingerprints are identical over time, and differing ones were correlated with original history contents, indicating static browsing preferences (for history subvectors of size 50). We report a striking result that it is enough to test for a small number of pages in order to both enumerate users’ interests and perform an efficient and unique behavioral fingerprint; we show that testing 50 web pages is enough to fingerprint 42% of users in our database, increasing to 70% with 500 web pages. Finally, we show that indirect history data, such as information about categories of visited websites can also be effective in fingerprinting users, and that similar fingerprinting can be performed by common script providers such as Google or Facebook.",
"title": ""
},
{
"docid": "6649a93b48e8ee0187f2b7d85315a968",
"text": "a r t i c l e i n f o Keywords: Quayside operations Berth allocation Crane scheduling Container terminal operations This paper proposes a decision support system for optimizing operations on the quayside of a container terminal. Due to the existence of multiple parties involved in the decision making processes within port operations, it is essential to pay attention to each parties' concerns and demands which by nature are frequently conflicting with each other. This calls for a DSS that offers the flexibility of adjusting the balance within conflicting objectives, guiding the decision maker towards the final decision. Consequently, this study provides a DSS that determines the berthing and crane allocations simultaneously. To show the practical application of the DSS presented, a real life case study at a container terminal has been conducted. Implementation of the model shows that improvements ranging from 10% to 25% on service time and costs can be attained. The prolonged economic recession together with weak economic growth is leading container terminals to follow a cautious approach when serving their customers. Before the economic crisis, where more optimistic figures were prospected, concentrating on satisfying these customers to the highest level has been the more focused approach to gain share in the market. Nowadays, the necessity for considering cost issues while providing quality service to the customers has increased. The decisions for operations within the terminal depend on the balance of influence between terminal operators and shipping companies. From the terminal operator's perspective operating for high productivity and container throughput at low costs is a critical element to stay competitive. However, from the shipping companies' perspective low turnaround time and reliability regarding adherence to promised handling times are more critical elements. Hence, there are different views existing between the parties involved at the container terminals [24,29]. With shipping companies on one side and terminal operators on the other, each having its own concerns and demands and which by nature are frequently conflicting with each other, the decision makers call for supporting instruments that help them to attain a balance among those differing intentions. Yet, recent literature on quayside operations within a container terminal does not provide adequate support to resolve the issue via practical considerations. Subsequently, this study attempts to provide a decision support tool that determines the berthing and crane allocations simultaneously under multiple objectives. At a container terminal, vessels are docked on a berth …",
"title": ""
},
{
"docid": "bd90744763b80725fa616590b896a438",
"text": "When a message is transformed into a ciphertext in a way designed to protect both its privacy and authenticity, there may be additional information, such as a packet header, that travels alongside the ciphertext (at least conceptually) and must get authenticated with it. We formalize and investigate this authenticated-encryption with associated-data (AEAD) problem. Though the problem has long been addressed in cryptographic practice, it was never provided a definition or even a name. We do this, and go on to look at efficient solutions for AEAD, both in general and for the authenticated-encryption scheme OCB. For the general setting we study two simple ways to turn an authenticated-encryption scheme that does not support associated-data into one that does: nonce stealing and ciphertext translation. For the case of OCB we construct an AEAD-scheme by combining OCB and the pseudorandom function PMAC, using the same key for both algorithms. We prove that, despite \"interaction\" between the two schemes when using a common key, the combination is sound. We also consider achieving AEAD by the generic composition of a nonce-based, privacy-only encryption scheme and a pseudorandom function.",
"title": ""
},
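The abstract above formalizes authenticated encryption with associated data (AEAD), where a header travels in the clear yet is still authenticated alongside the ciphertext. A minimal usage sketch of that interface is given below, using AES-GCM from the `cryptography` package rather than the OCB/PMAC construction studied in the paper; key, nonce and header values are illustrative.

```python
# Sketch: associated data is authenticated but not encrypted.
from os import urandom
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = urandom(12)                                # never reuse a nonce under one key

header = b"packet-header: travels in the clear"    # associated data
payload = b"secret payload"

ct = aead.encrypt(nonce, payload, header)          # header is bound to the ciphertext
pt = aead.decrypt(nonce, ct, header)               # a modified header raises InvalidTag
assert pt == payload
```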
{
"docid": "75d5241d400981ba9ca2113602c73f2d",
"text": "Motivated by the fact that characteristics of different sound classes are highly diverse in different temporal scales and hierarchical levels, a novel deep convolutional neural network (CNN) architecture is proposed for the environmental sound classification task. This network architecture takes raw waveforms as input, and a set of separated parallel CNNs are utilized with different convolutional filter sizes and strides, in order to learn feature representations with multi-temporal resolutions. On the other hand, the proposed architecture also aggregates hierarchical features from multi-level CNN layers for classification using direct connections between convolutional layers, which is beyond the typical single-level CNN features employed by the majority of previous studies. This network architecture also improves the flow of information and avoids vanishing gradient problem. The combination of multi-level features boosts the classification performance significantly. Comparative experiments are conducted on two datasets: the environmental sound classification dataset (ESC-50), and DCASE 2017 audio scene classification dataset. Results demonstrate that the proposed method is highly effective in the classification tasks by employing multi-temporal resolution and multi-level features, and it outperforms the previous methods which only account for single-level features.",
"title": ""
},
{
"docid": "0b0b313c16697e303522fef245d97ba8",
"text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.",
"title": ""
},
{
"docid": "c78ebe9d42163142379557068b652a9c",
"text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors but its grading gives different conclusions which may vary from one doctor to another. In this project, it provides a foundation of segmentation and edge detection, as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. There are dissimilar types of algorithm were developed for brain tumor detection. Comparing to the other algorithms the performance of fuzzy c-means plays a major role. The patient's stage is determined by this process, whether it can be cured with medicine or not. Also we study difficulty to detect Mild traumatic brain injury (mTBI) the current tools are qualitative, which can lead to poor diagnosis and treatment and to overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images. A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "713d709c14c8943638d2c80e3aeaded2",
"text": "Microfluidics-based biochips combine electronics with biology to open new application areas such as point-of-care medical diagnostics, on-chip DNA analysis, and automated drug discovery. Bioassays are mapped to microfluidic arrays using synthesis tools, and they are executed through the manipulation of sample and reagent droplets by electrical means. Most prior work on CAD for biochips has assumed independent control of electrodes using a large number of (electrical) input pins. Such solutions are not feasible for low-cost disposable biochips that are envisaged for many field applications. A more promising design strategy is to divide the microfluidic array into smaller partitions and use a small number of electrodes to control the electrodes in each partition. We propose a partitioning algorithm based on the concept of \"droplet trace\", which is extracted from the scheduling and droplet routing results produced by a synthesis tool. An efficient pin assignment method, referred to as the \"Connect-5 algorithm\", is combined with the array partitioning technique based on droplet traces. The array partitioning and pin assignment methods are evaluated using a set of multiplexed bioassays.",
"title": ""
},
{
"docid": "f15a7d48f3c42ccc97480204dc5c8622",
"text": "We have developed a wearable upper limb support system (ULSS) for support during heavy overhead tasks. The purpose of this study is to develop the voluntary motion support algorithm for the ULSS, and to confirm the effectiveness of the ULSS with the developed algorithm through dynamic evaluation experiments. The algorithm estimates the motor intention of the wearer based on a bioelectrical signal (BES). The ULSS measures the BES via electrodes attached onto the triceps brachii, deltoid, and clavicle. The BES changes in synchronization with the motion of the wearer's upper limbs. The algorithm changes a control phase by comparing the BES and threshold values. The algorithm achieves voluntary motion support for dynamic tasks by changing support torques of the ULSS in synchronization with the control phase. Five healthy adult males moved heavy loads vertically overhead in the evaluation experiments. In a random instruction experiment, the volunteers moved in synchronization with random instructions, and we confirmed that the control phase changes in synchronization with the random instructions. In a motion support experiment, we confirmed that the average number of the vertical motion with the ULSS increased 2.3 times compared to the average number without the ULSS. As a result, the ULSS with the algorithm supports the motion voluntarily, and it has a positive effect on the support. In conclusion, we could develop the novel voluntary motion support algorithm of the ULSS.",
"title": ""
},
{
"docid": "7ad194d865b92f1956ef89f9e8ede31e",
"text": "The Social Media Intelligence Analyst is a new operational role within a State Control Centre in Victoria, Australia dedicated to obtaining situational awareness from social media to support decision making for emergency management. We outline where this role fits within the structure of a command and control organization, describe the requirements for such a position and detail the operational activities expected during an emergency event. As evidence of the importance of this role, we provide three real world examples where important information was obtained from social media which led to improved outcomes for the community concerned. This is the first time a dedicated role has been formally established solely for monitoring social media for emergency management intelligence gathering purposes in Victoria. To the best of our knowledge, it is also the first time such a dedicated position in an operational crisis coordination centre setting has been described in the literature.",
"title": ""
},
{
"docid": "8d947d08bc78467c14be3be23a345312",
"text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed herein are those of the author and should not be attributed to the IMF, its Executive Board, or its management. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. We examine the effects of oil rents on corruption and state stability exploiting the exogenous within-country variation of a new measure of oil rents for a panel of 31 oil-exporting countries during the period 1992 to 2005. We find that an increase in oil rents significantly increases corruption, significantly deteriorates political rights while at the same time leading to a significant improvement in civil liberties. We argue that these findings can be explained by the political elite having an incentive to extend civil liberties but reduce political rights in the presence of oil windfalls to evade redistribution and conflict. We support our argument documenting that there is a significant effect of oil rents on corruption in countries with a high share of state participation in oil production while no such link exists in countries where state participation in oil production is low. JEL Classification Numbers: C33, D73, D74, D72, H21",
"title": ""
},
{
"docid": "d597b9229a3f9a9c680d25180a4b6308",
"text": "Mental health problems are highly prevalent and increasing in frequency and severity among the college student population. The upsurge in mobile and wearable wireless technologies capable of intense, longitudinal tracking of individuals, provide enormously valuable opportunities in mental health research to examine temporal patterns and dynamic interactions of key variables. In this paper, we present an integrative framework for social anxiety and depression (SAD) monitoring, two of the most common disorders in the college student population. We have developed a smartphone application and the supporting infrastructure to collect both passive sensor data and active event-driven data. This supports intense, longitudinal, dynamic tracking of anxious and depressed college students to evaluate how their emotions and social behaviors change in the college campus environment. The data will provide critical information about how student mental health problems are maintained and, ultimately, how student patterns on campus shift following treatment.",
"title": ""
},
{
"docid": "f772d3bbec3d92669ff28b616d7a0bde",
"text": "This paper reports on the preliminary results of an ongoing study examining the teaching of new primary school topics based on Computational Thinking in New Zealand. We analyse detailed feedback from 13 teachers participating in the study, who had little or no previous experience teaching Computer Science or related topics. From this we extract key themes identified by the teachers that are likely to be encountered when deploying a new curriculum, including unexpected opportunities for cross-curricula learning, development of students' social skills, and engaging a wide range of students. From here we articulate key concepts and issues that arise in the primary school context, based on feedback during professional development for the study, and direct feedback from teachers on the experience of delivering the new material in the classroom.",
"title": ""
},
{
"docid": "c06e1491b0aabbbd73628c2f9f45d65d",
"text": "With the integration of deep learning into the traditional field of reinforcement learning in the recent decades, the spectrum of applications that artificial intelligence caters is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art reinforcement learning methods for playing Pacman maps. In particular, this paper demonstrates that Combined DQN, a variation of Rainbow DQN, is able to attain high performance in small maps such as 506Pacman, smallGrid and mediumGrid. It was also demonstrated that the trained agents could also play Pacman maps similar to training with limited performance. Nevertheless, the algorithm suffers due to its data inefficiency and lack of human-like features, which may be remedied in the future by introducing more human-like features into the algortihm, such as intrinsic motivation and imagination.",
"title": ""
},
{
"docid": "88a5ff63e8dc768def8fd92b8e57fd13",
"text": "We propose using profile compatibility to differentiate genuine and fake product reviews. For each product, a collective profile is derived from a separate collection of reviews. Such a profile contains a number of aspects of the product, together with their descriptions. For a given unseen review about the same product, we build a test profile using the same approach. We then perform a bidirectional alignment between the test and the collective profile, to compute a list of aspect-wise compatible features. We adopt Ott et al. (2011)’s op spam v1.3 dataset for identifying truthful vs. deceptive reviews. We extend the recently proposed N-GRAM+SYN model of Feng et al. (2012a) by incorporating profile compatibility features, showing such an addition significantly improves upon their state-ofart classification performance.",
"title": ""
}
] |
scidocsrr
|
f129a355f72a6b01e9866ddc603f63ec
|
The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions.
|
[
{
"docid": "4b16a9f76f4db350fcd82b3cdbbf189a",
"text": "CASE-CONTF~OL studies are highly attractive. They can be executed quickly and at low cost, even when the disorders of interest are rare. Furthermore, the execution of pilot case-control studies is becoming automated; strategies have been devised for the ‘computer scanning’ of large files of hospital admission diagnoses and prior drug exposures, with more detailed analyses carried out in the same data set on an ad hoc basis [l]. As evidence of their growing popularity, when one original article was randomly selected from each issue of The New England Journal of Medicine, The Lancet, and the Journal of the American Medical Association for the years, 1956, 1966 and 1976, the proportion reporting case-control analytic studies increased fourfold over these two decades (2-8”;) whereas the proportion reporting cohort analytic studies fell by half (30-159); incidentally, a general trend toward fewer study subjects but more study authors was also noted [2]. If an ebullition of case-control studies is in progress, a review of their merits and shortcomings is of more than academic interest, and this symposium was well-timed. Because this meeting also coincided with the completion of some work we had been doing on biases in analytic research (Appendix 3), I offered to summarize a portion of this work for presentation and discussion here. A first draft of a catalog of biases which may distort the design, execution, analysis and interpretation of research appears as an appendix to this paper (additions, corrections and citations of examples would be welcomed by the author). For this paper, I have considered those biases which arise in analytic studies and have focused on two subsets which affect the specification and selection of the study sample and the measurement of exposures and outcomes, since these attributes most clearly distinguish the case-control study from its relatives. * Furthermore. I have included occasional discussions of cohort analytic studies because they represent a common, alternative, subexperimental approach to determining causation. Finally, after describing the prospects for the prevention (or at least the measurement) of these biases in these two forms of analytic studies, this paper closes with suggestions for further methodologic research.",
"title": ""
}
] |
[
{
"docid": "9c698f09275057887803010fb6dc789e",
"text": "Type 2 diabetes is now a pandemic and shows no signs of abatement. In this Seminar we review the pathophysiology of this disorder, with particular attention to epidemiology, genetics, epigenetics, and molecular cell biology. Evidence is emerging that a substantial part of diabetes susceptibility is acquired early in life, probably owing to fetal or neonatal programming via epigenetic phenomena. Maternal and early childhood health might, therefore, be crucial to the development of effective prevention strategies. Diabetes develops because of inadequate islet β-cell and adipose-tissue responses to chronic fuel excess, which results in so-called nutrient spillover, insulin resistance, and metabolic stress. The latter damages multiple organs. Insulin resistance, while forcing β cells to work harder, might also have an important defensive role against nutrient-related toxic effects in tissues such as the heart. Reversal of overnutrition, healing of the β cells, and lessening of adipose tissue defects should be treatment priorities.",
"title": ""
},
{
"docid": "292eea3f09d135f489331f876052ce88",
"text": "-Steganography is a term used for covered writing. Steganography can be applied on different file formats, such as audio, video, text, image etc. In image steganography, data in the form of image is hidden under some image by using transformations such as ztransformation, integer wavelet transformation, DWT etc and then sent to the destination. At the destination, the data is extracted from the cover image using the inverse transformation. This paper presents a new approach for image steganography using DWT. The cover image is divided into higher and lower frequency sub-bands and data is embedded into higher frequency sub-bands. Arnold Transformation is used to increase the security. The proposed approach is implemented in MATLAB 7.0 and evaluated on the basis of PSNR, capacity and correlation. The proposed approach results in high capacity image steganography as compared to existing approaches. Keywords-Image Steganography, PSNR, Discrete Wavelet Transform.",
"title": ""
},
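The passage above describes embedding data in the high-frequency DWT sub-bands of a cover image. Below is a minimal sketch of that general idea, assuming the PyWavelets (pywt) package; the sign-based embedding rule, the `strength` parameter, and the omission of the Arnold scrambling and pixel-range handling steps are illustrative simplifications, not the authors' exact scheme.

```python
# Minimal DWT-domain embedding sketch: hide bits in the diagonal-detail (HH) sub-band.
import numpy as np
import pywt

def embed(cover, payload_bits, wavelet="haar", strength=8.0):
    """Hide a flat array of 0/1 bits in the HH sub-band of a single-level 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    flat = cD.flatten()
    if len(payload_bits) > flat.size:
        raise ValueError("payload too large for this cover image")
    for i, bit in enumerate(payload_bits):
        flat[i] = strength if bit else -strength   # coefficient sign encodes the bit
    cD = flat.reshape(cD.shape)
    # Note: a real implementation would also clip/round back to a valid pixel range.
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

def extract(stego, n_bits, wavelet="haar"):
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), wavelet)
    return (cD.flatten()[:n_bits] > 0).astype(np.uint8)

cover = np.random.randint(0, 256, (64, 64))   # stand-in for a real cover image
bits = np.random.randint(0, 2, 100)
stego = embed(cover, bits)
print((extract(stego, 100) == bits).mean())   # fraction of hidden bits recovered
```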
{
"docid": "4ddfa45a585704edcca612f188cc6b78",
"text": "This paper presents a case study of using distributed word representations, word2vec in particular, for improving performance of Named Entity Recognition for the eCommerce domain. We also demonstrate that distributed word representations trained on a smaller amount of in-domain data are more effective than word vectors trained on very large amount of out-of-domain data, and that their combination gives the best results.",
"title": ""
},
{
"docid": "1bf796a1b7e802076e25b9d0742a7f91",
"text": "Modern computing devices and user interfaces have necessitated highly interactive querying. Some of these interfaces issue a large number of dynamically changing and continuous queries to the backend. In others, users expect to inspect results during the query formulation process, in order to guide or help them towards specifying a full-fledged query. Thus, users end up issuing a fast-changing workload to the underlying database. In such situations, the user's query intent can be thought of as being in flux. In this paper, we show that the traditional query execution engines are not well-suited for this new class of highly interactive workloads. We propose a novel model to interpret the variability of likely queries in a workload. We implemented a cyclic scan-based approach to process queries from such workloads in an efficient and practical manner while reducing the overall system load. We evaluate and compare our methods with traditional systems and demonstrate the scalability of our approach, enabling thousands of queries to run simultaneously within interactive response times given low memory and CPU requirements.",
"title": ""
},
{
"docid": "a0787399eaca5b59a87ed0644da10fc6",
"text": "This work faces the problem of combining the outputs of two co-siting BTS, one operating with 2G networks and the other with 3G (or 4G) networks. This requirement is becoming more and more frequent because many operators, for increasing the capacity for data and voice signal transmission, have overlaid the new network in 3G or 4G technology to the existing 2G infrastructure. The solution here proposed is constituted by a low loss combiner realized through a directional double single-sided filtering system, which manages both TX and RX signals from each BTS output. The design approach for the combiner architecture is described with a particular emphasis on the synthesis of the double single-sided filters (realized by means of extracted pole technique). A prototype of the low-loss combiner has been designed and fabricated for validating the proposed approach. The results obtained are here discussed making into evidence the pros & cons of the proposed solution.",
"title": ""
},
{
"docid": "e99c12645fd14528a150f915b3849c2b",
"text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA",
"title": ""
},
{
"docid": "286fc2c4342a9269f40aa2701271f33a",
"text": "While Blockchain network brings tremendous benefits, there are concerns whether their performance would match up with the mainstream IT systems. This paper aims to investigate whether the consensus process using Practical Byzantine Fault Tolerance (PBFT) could be a performance bottleneck for networks with a large number of peers. We model the PBFT consensus process using Stochastic Reward Nets (SRN) to compute the mean time to complete consensus for networks up to 100 peers. We create a blockchain network using IBM Bluemix service, running a production-grade IoT application and use the data to parameterize and validate our models. We also conduct sensitivity analysis over a variety of system parameters and examine the performance of larger networks",
"title": ""
},
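To make concrete why consensus cost grows with the number of peers in the passage above, here is a back-of-the-envelope sketch of the standard PBFT quorum rules and per-round message counts; this is generic PBFT arithmetic, not the paper's SRN model.

```python
# Standard PBFT sizing: tolerated faults, quorum size, and approximate messages per round.
def pbft_parameters(n):
    f = (n - 1) // 3                 # maximum tolerated Byzantine peers (n >= 3f + 1)
    quorum = 2 * f + 1               # matching messages needed to advance a phase
    pre_prepare = n - 1              # primary broadcasts the ordered request
    prepare = (n - 1) * (n - 1)      # each backup broadcasts a PREPARE
    commit = n * (n - 1)             # every replica broadcasts a COMMIT
    return f, quorum, pre_prepare + prepare + commit

for n in (4, 16, 64, 100):
    f, quorum, msgs = pbft_parameters(n)
    print(f"n={n:>3}  tolerates f={f:<2}  quorum={quorum:<3}  ~{msgs} messages/round")
```

The roughly quadratic message count is the scaling behavior whose timing impact the SRN model quantifies.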
{
"docid": "f767e0a9711522b06b8d023453f42f3a",
"text": "A novel low-cost method for generating circular polarization in a dielectric resonator antenna is proposed. The antenna comprises four rectangular dielectric layers, each one being rotated by an angle of 30 ° relative to its adjacent layers. Utilizing such an approach has provided a circular polarization over a bandwidth of 6% from 9.55 to 10.15 GHz. This has been achieved in conjunction with a 21% impedance-matching bandwidth over the same frequency range. Also, the radiation efficiency of the proposed circularly polarized dielectric resonator antenna is 93% in this frequency band of operation",
"title": ""
},
{
"docid": "0b6ec738616ee187c69eb3b3d5b924ea",
"text": "The process of collecting annotated data is expensive and time-consuming. Making use of crowdsourcing instead of experts in a laboratory setting is a viable alternative to reduce these costs. However, without adequate quality control the obtained labels may be less reliable. Whereas crowdsourcing reduces only the costs per annotation, another technique, active learning, aims at reducing the overall annotation costs by selecting the most important instances of the dataset and only asking for manual annotations for these selected samples. Herein, we investigate the advantages of combining crowdsourcing and different iterative active learning paradigms for audio data annotation. Further, we incorporate an annotator trustability score to further reduce the labelling effort needed and, at the same time, to achieve better classification results. In this context, we introduce a novel active learning algorithm, called Trustability-based dynamic active learning, which accumulates manual annotations in each step until a trustability-weighted agreement level of annotators is reached. Furthermore, we bring this approach into the real world and integrate it in our gamified intelligent crowdsourcing platform iHEARu-PLAY. Key experimental results on an emotion recognition task indicate that a considerable relative annotation cost reduction of up to 90.57 % can be achieved when compared with a non-intelligent annotation approach. Moreover, our proposed method reaches an unweighted average recall value of 73.71 %, while a conventional passive learning algorithm peaks at 60.03 %. Therefore, our novel approach not only efficiently reduces the manual annotation work load but also improves the classification performance.",
"title": ""
},
{
"docid": "bf2394d7095cdd7fbe6d59a781e761b0",
"text": "Fingerprint segmentation is one of the most important preprocessing steps in an automatic fingerprint identification system (AFIS). It is used to separate a fingerprint area (foreground) from the image background. Accurate segmentation of a fingerprint will greatly reduce the computation time of the following processing steps, and discard many spurious minutiae. In this paper, a new segmentation algorithm is presented. Apart from its simplicity, it is characterized by being neither depend on empirical thresholds chosen by experts or a learned model trained by elements generated from manually segmented fingerprints. The algorithm uses the block range as a feature to achieve fingerprint segmentation. Then, some Morphological closing and opening operations are performed, to extract the foreground from the image. The performance of the proposed technique is checked by evaluating the classification error (Err). Experimental results have shown that when analyzing FVC2004, FVC2002, and FVC2000 databases using the proposed algorithm, the average classification error rates are much less than those obtained by other approaches. Several illustrative examples are given to verify this conclusion.",
"title": ""
},
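A minimal sketch of the block-range idea from the passage above: split the image into blocks, use the (max − min) intensity per block as the feature, keep blocks whose range exceeds a threshold, then clean the mask with morphological closing and opening. The block size and threshold are illustrative choices, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def segment_fingerprint(img, block=16, range_thresh=40):
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            blk = img[i*block:(i+1)*block, j*block:(j+1)*block]
            mask[i, j] = (blk.max() - blk.min()) > range_thresh   # block range feature
    mask = binary_closing(mask, structure=np.ones((3, 3)))        # fill small holes
    mask = binary_opening(mask, structure=np.ones((3, 3)))        # drop isolated blocks
    # Upsample the block-level mask back to pixel resolution.
    full = np.kron(mask.astype(np.uint8), np.ones((block, block), np.uint8))
    return full[:h, :w].astype(bool)

img = np.random.randint(0, 256, (256, 256)).astype(np.uint8)      # stand-in image
foreground = segment_fingerprint(img)
print(foreground.mean())   # fraction of pixels kept as fingerprint area
```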
{
"docid": "388101f40ff79f2543b111aad96c4180",
"text": "Based on available literature, ecology and economy of light emitting diode (LED) lights in plant foods production were assessed and compared to high pressure sodium (HPS) and compact fluorescent light (CFL) lamps. The assessment summarises that LEDs are superior compared to other lamp types. LEDs are ideal in luminous efficiency, life span and electricity usage. Mercury, carbon dioxide and heat emissions are also lowest in comparison to HPS and CFL lamps. This indicates that LEDs are indeed economic and eco-friendly lighting devices. The present review indicates also that LEDs have many practical benefits compared to other lamp types. In addition, they are applicable in many purposes in plant foods production. The main focus of the review is the targeted use of LEDs in order to enrich phytochemicals in plants. This is an expedient to massive improvement in production efficiency, since it diminishes the number of plants per phytochemical unit. Consequently, any other production costs (e.g. growing space, water, nutrient and transport) may be reduced markedly. Finally, 24 research articles published between 2013 and 2017 were reviewed for targeted use of LEDs in the specific, i.e. blue range (400-500 nm) of spectrum. The articles indicate that blue light is efficient in enhancing the accumulation of health beneficial phytochemicals in various species. The finding is important for global food production. © 2017 Society of Chemical Industry.",
"title": ""
},
{
"docid": "72147e489de9053bf1a4844c2f0de717",
"text": "Video Question Answering is a challenging problem in visual information retrieval, which provides the answer to the referenced video content according to the question. However, the existing visual question answering approaches mainly tackle the problem of static image question, which may be ineffectively for video question answering due to the insufficiency of modeling the temporal dynamics of video contents. In this paper, we study the problem of video question answering by modeling its temporal dynamics with frame-level attention mechanism. We propose the attribute-augmented attention network learning framework that enables the joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate the multi-step reasoning process for our proposed attention network to further improve the performance. We construct a large-scale video question answering dataset. We conduct the experiments on both multiple-choice and open-ended video question answering tasks to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "a81c87374e7ea9a3066f643ac89bfd2b",
"text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.",
"title": ""
},
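The following short illustration shows the two 3x3 Sobel masks described above: they estimate the horizontal and vertical gradients, and their combined magnitude gives the edge strength. It uses NumPy and SciPy's 2-D convolution; the toy image is only for demonstration.

```python
import numpy as np
from scipy.signal import convolve2d

Gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])          # gradient estimate in the x-direction
Gy = Gx.T                            # gradient estimate in the y-direction

def sobel_magnitude(image):
    gx = convolve2d(image, Gx, mode="same", boundary="symm")
    gy = convolve2d(image, Gy, mode="same", boundary="symm")
    return np.hypot(gx, gy)          # approximate absolute gradient magnitude

# Toy image: a dark square on a bright background gives strong responses along its border.
img = np.full((32, 32), 200.0)
img[8:24, 8:24] = 50.0
edges = sobel_magnitude(img)
print(edges.max(), edges[0, 0])      # large at the border, ~0 in flat regions
```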
{
"docid": "047c36e2650b8abde75cccaeb0368c88",
"text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.",
"title": ""
},
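The passage above compares concatenation and summation skip connections in a 3D FCN. The hedged PyTorch sketch below contrasts the two variants for a single encoder-to-decoder skip; channel counts and shapes are illustrative, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, channels, mode="sum"):
        super().__init__()
        self.mode = mode
        in_ch = channels * 2 if mode == "concat" else channels
        self.conv = nn.Conv3d(in_ch, channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat, encoder_feat):
        if self.mode == "concat":
            x = torch.cat([decoder_feat, encoder_feat], dim=1)  # stack channels
        else:
            x = decoder_feat + encoder_feat                     # element-wise sum
        return torch.relu(self.conv(x))

dec = torch.randn(1, 16, 8, 32, 32)   # (batch, channels, depth, height, width)
enc = torch.randn(1, 16, 8, 32, 32)
print(SkipBlock(16, "sum")(dec, enc).shape, SkipBlock(16, "concat")(dec, enc).shape)
```

The summation variant keeps the channel count fixed, which is one practical reason it is often cheaper than concatenation at the same feature width.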
{
"docid": "de478fc24877f9e144615d6f3bb46799",
"text": "Design issues of a spontaneous speech corpus is described. The corpus under compilation will contain 800-1000 hour spontaneously uttered Common Japanese speech and the morphologically annotated transcriptions. Also, segmental and intonation labeling will be provided for a subset of the corpus. The primary application domain of the corpus is speech recognition of spontaneous speech, but we plan to make it useful for natural language processing and phonetic/linguistic studies also.",
"title": ""
},
{
"docid": "3e80fb154cb594dc15f5318b774cf0c3",
"text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness—82.1 ± 74.7 days). PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revelaed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences between clinical, radiological, and pathological picture of PML between Indian and Western countries.",
"title": ""
},
{
"docid": "6799b2cb4eda7ab7bd2c5e31ff7a4ec1",
"text": "The effect of right, left, and alternate nostril yoga breathing (i.e., RNYB, LNYB, and ANYB, respectively) were compared with breath awareness (BAW) and normal breathing (CTL). Autonomic and respiratory variables were studied in 21 male volunteers with ages between 18 and 45 years and experience in the yoga breathing practices between 3 and 48 months. Subjects were assessed in five experimental sessions on five separate days. The sessions were in fixed possible sequences and subjects were assigned to a sequence randomly. Each session was for 40 min; 30 min for the breathing practice, preceded and followed by 5 min of quiet sitting. Assessments included heart rate variability, skin conductance, finger plethysmogram amplitude, breath rate, and blood pressure. Following RNYB there was a significant increase in systolic, diastolic and mean pressure. In contrast, the systolic and diastolic pressure decreased after ANYB and the systolic and mean pressure were lower after LNYB. Hence, unilateral nostril yoga breathing practices appear to influence the blood pressure in different ways. These effects suggest possible therapeutic applications.",
"title": ""
},
{
"docid": "0ef2419d4be5db10d2caaf6c0424796c",
"text": "Modeled after the hierarchical control architecture of power transmission systems, a layering of primary, secondary, and tertiary control has become the standard operation paradigm for islanded microgrids. Despite this superficial similarity, the control objectives in microgrids across these three layers are varied and ambitious, and they must be achieved while allowing for robust plug-and-play operation and maximal flexibility, without hierarchical decision making and time-scale separations. In this paper, we explore control strategies for these three layers and illuminate some possibly unexpected connections and dependencies among them. Building from a first-principle analysis of decentralized primary droop control, we study centralized, decentralized, and distributed architectures for secondary frequency regulation. We find that averaging-based distributed controllers using communication among the generation units offer the best combination of flexibility and performance. We further leverage these results to study constrained ac economic dispatch in a tertiary control layer. Surprisingly, we show that the minimizers of the economic dispatch problem are in one-to-one correspondence with the set of steady states reachable by droop control. In other words, the adoption of droop control is necessary and sufficient to achieve economic optimization. This equivalence results in simple guidelines to select the droop coefficients, which include the known criteria for power sharing. We illustrate the performance and robustness of our designs through simulations.",
"title": ""
}
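The droop-control and power-sharing discussion above can be illustrated with a small steady-state calculation. The sketch below assumes the textbook primary droop law omega = omega* − m_i (P_i − P_i*); with a common steady-state frequency, the extra load splits across units in inverse proportion to their droop coefficients m_i. This is a generic illustration, not the paper's distributed secondary or tertiary controller.

```python
import numpy as np

def droop_steady_state(P_star, m, P_load, omega_star=1.0):
    """Common steady-state frequency and per-unit injections for a given total load."""
    inv_m = 1.0 / np.asarray(m, dtype=float)
    omega = omega_star - (P_load - np.sum(P_star)) / np.sum(inv_m)
    P = np.asarray(P_star, dtype=float) + (omega_star - omega) * inv_m
    return omega, P

omega, P = droop_steady_state(P_star=[0.4, 0.4, 0.2], m=[0.05, 0.10, 0.10], P_load=1.3)
print(round(omega, 4), np.round(P, 3))   # the unit with the smallest m picks up the most extra load
```

Selecting the m_i proportionally to marginal generation costs is, per the passage, what makes this primary layer line up with the economic dispatch solution.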
] |
scidocsrr
|
ce8f7f924fba900b4a8228b06f84d06e
|
Building Portable Options: Skill Transfer in Reinforcement Learning
|
[
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: schwartz@cs.stanford.edu Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
}
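A minimal sketch of a temporally extended "skill" in the options style the passage alludes to: an initiation set, an internal policy, and a termination condition, so that a whole action sequence can be invoked as one operator. This is a generic illustration of the abstraction, not the SKILLS algorithm itself.

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Set

State = Hashable
Action = Hashable

@dataclass
class Skill:
    initiation: Set[State]                      # states where the skill may start
    policy: Callable[[State], Action]           # which primitive action to take
    terminates: Callable[[State], bool]         # when control returns to the agent

    def run(self, state, step, max_steps=100):
        """Execute the skill in an environment given by a (state, action) -> state function."""
        assert state in self.initiation
        for _ in range(max_steps):
            if self.terminates(state):
                break
            state = step(state, self.policy(state))
        return state

# Toy corridor: walk right until reaching cell 5.
walk_right = Skill(initiation={0, 1, 2, 3, 4},
                   policy=lambda s: +1,
                   terminates=lambda s: s >= 5)
print(walk_right.run(0, step=lambda s, a: s + a))   # -> 5
```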
] |
[
{
"docid": "e9a66ce7077baf347d325bca7b008d6b",
"text": "Recent research have shown that the Wavelet Transform (WT) can potentially be used to extract Partial Discharge (PD) signals from severe noise like White noise, Random noise and Discrete Spectral Interferences (DSI). It is important to define that noise is a significant problem in PD detection. Accordingly, the paper mainly deals with denoising of PD signals, based on improved WT techniques namely Translation Invariant Wavelet Transform (TIWT). The improved WT method is distinct from other traditional method called as Fast Fourier Transform (FFT). The TIWT not only remain the edge of the original signal efficiently but also reduce impulsive noise to some extent. Additionally Translation Invariant (TI) Wavelet Transform denoising is used to suppress Pseudo Gibbs phenomenon. In this paper an attempt has been made to review the methodology of denoising the partial discharge signals and shows that the proposed denoising method results are better when compared to other wavelet-based approaches like FFT, wavelet hard thresholding, wavelet soft thresholding, by evaluating five different parameters like, Signal to noise ratio, Cross correlation coefficient, Pulse amplitude distortion, Mean square error, Reduction in noise level.",
"title": ""
},
{
"docid": "0574e5c8cf24cd2f72a01223c54cec09",
"text": "In the wake of the Mexican and Asian currency turmoil, the subject of financial crises has come to the forefront of academic and policy discussions. This paper analyzes the links between banking and currency crises. We find that: problems in the banking sector typically precede a currency crisis--the currency crisis deepens the banking crisis, activating a vicious spiral; financial liberalization often precedes banking crises. The anatomy of these episodes suggests that crises occur as the economy enters a recession, following a prolonged boom in economic activity that was fueled by credit, capital inflows and accompanied by an overvalued currency. (JEL F30, F41) * Graciela L. Kaminsky, George Washington University, Washington, D.C. 20552. Carmen M. Reinhart, University of Maryland, College Park, Maryland 20742. We thank two anonymous referees for very helpful suggestions. We also thank Guillermo Calvo, Rudiger Dornbusch, Peter Montiel, Vincent Reinhart, John Rogers, Andrew Rose and seminar participants at Banco de México, the Board of Governors of the Federal Reserve System, Florida State University, Harvard, the IMF, Johns Hopkins University, Massachusetts Institute of Technology, Stanford University, SUNY at Albany, University of California, Berkeley, UCLA, University of California, Santa Cruz, University of Maryland, University of Washington, The World Bank, and the conference on “Speculative Attacks in the Era of the Global Economy: Theory, Evidence, and Policy Implications,” (Washington, DC, December 1995), for very helpful comments and Greg Belzer, Kris Dickson, and Noah Williams for superb research assistance. 1 Pervasive currency turmoil, particularly in Latin America in the late 1970s and early 1980s, gave impetus to a flourishing literature on balance-of-payments crises. As stressed in Paul Krugman’s (1979) seminal paper, in this literature crises occur because a country finances its fiscal deficit by printing money to the extent that excessive credit growth leads to the eventual collapse of the fixed exchange rate regime. With calmer currency markets in the midand late 1980s, interest in this literature languished. The collapse of the European Exchange Rate Mechanism, the Mexican peso crisis, and the wave of currency crises sweeping through Asia have, however, rekindled interest in the topic. Yet, the focus of this recent literature has shifted. While the earlier literature emphasized the inconsistency between fiscal and monetary policies and the exchange rate commitment, the new one stresses self-fulfilling expectations and herding behavior in international capital markets. In this view, as Guillermo A.Calvo (1995, page 1) summarizes “If investors deem you unworthy, no funds will be forthcoming and, thus, unworthy you will be.” Whatever the causes of currency crises, neither the old literature nor the new models of self-fulfilling crises have paid much attention to the interaction between banking and currency problems, despite the fact that many of the countries that have had currency crises have also had full-fledged domestic banking crises around the same time. Notable exceptions are: Carlos Diaz-Alejandro (1985), Andres Velasco (1987), Calvo (1995), Ilan Goldfajn and Rodrigo Valdés (1995), and Victoria Miller (1995). As to the empirical evidence on the potential links between what we dub the twin crises, the literature has been entirely silent. 
The Thai, Indonesian, and Korean crises are not the first examples of dual currency and banking woes, they are only the recent additions to a long list of casualties which includes Chile, Finland, Mexico, Norway, and Sweden. In this paper, we aim to fill this void in the literature and examine currency and banking crises episodes for a number of industrial and developing countries. The former include: Denmark, Finland, Norway, Spain, and Sweden. The latter focus on: Argentina, Bolivia, Brazil, Chile, Colombia, Indonesia,",
"title": ""
},
{
"docid": "7c449e74c3eb6c1ba6b9e8630e5e4d90",
"text": "The effect of cereal-based diets varying in dietary fibre (DF) on gastric emptying and glucose absorption over an isolated loop of jejunum was studied in four pigs fitted with two sets of re-entrant cannulas. The pigs were fed on either a wheat-flour diet or three diets based on oat flour (endosperm), rolled oats or oat bran containing different amounts of soluble DF. Mean transit time (MTT) of liquid estimated from the output from the first jejunal cannula was significantly higher with the two diets having the highest DF content, but MTT of dry matter (DM), starch, xylose and neutral non-starch polysaccharides (nNSP) was not correlated directly to the DF content of the diet. DF had a stimulatory effect on secretion of gastrointestinal juices, but the effect was not linearly correlated with the DF content of the diet. Starch was significantly degraded in digesta collected within 30 min after feeding with malto-oligosaccharides accounting for 140-147 g/kg total starch. The degradation was more extensive with higher DF and lower starch content of the diet. However, taking into account the differences in jejunal flow, the amount of malto-oligosaccharides available for absorption in the first 0.5 h decreased with higher levels of DF in the oat-based diets. The absorption of glucose from the isolated loop was 18-34 g/m intestine over an 8 h period with no significant differences between diets. This corresponded to a non-significant decrease in recovery of starch from 0.91 to 0.82 with increasing levels of DF and decreasing levels of starch in the diet. This suggests that the capacity for absorption of large doses of starch entering the proximal small intestine after ingestion of a carbohydrate-rich cereal-based diet has a major influence on the absorption at this site. Consequently any effect of DF on glucose absorption may be exerted either through the rate of gastric emptying or by impaired rate of absorption more distal in the small intestine and not by displacement of the site for starch absorption.",
"title": ""
},
{
"docid": "0b9ed15b4aaefb22aa8f0bb2b6c8fa00",
"text": "Most existing Multi-View Stereo (MVS) algorithms employ the image matching method using Normalized Cross-Correlation (NCC) to estimate the depth of an object. The accuracy of the estimated depth depends on the step size of the depth in NCC-based window matching. The step size of the depth must be small for accurate 3D reconstruction, while the small step significantly increases computational cost. To improve the accuracy of depth estimation and reduce the computational cost, this paper proposes an efficient image matching method for MVS. The proposed method is based on Phase-Only Correlation (POC), which is a high-accuracy image matching technique using the phase components in Fourier transforms. The advantages of using POC are (i) the correlation function is obtained only by one window matching and (ii) the accurate sub-pixel displacement between two matching windows can be estimated by fitting the analytical correlation peak model of the POC function. Thus, using POC-based window matching for MVS makes it possible to estimate depth accurately from the correlation function obtained only by one window matching. Through a set of experiments using the public MVS datasets, we demonstrate that the proposed method performs better in terms of accuracy and computational cost than the conventional method.",
"title": ""
},
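A compact sketch of Phase-Only Correlation (POC) between two image windows as described above: normalize the cross spectrum to unit magnitude so that only phase differences remain, and the inverse FFT then peaks at the displacement. The analytical peak-model fitting used for sub-pixel accuracy is omitted here.

```python
import numpy as np

def poc_shift(win1, win2):
    F1, F2 = np.fft.fft2(win1), np.fft.fft2(win2)
    cross = F1 * np.conj(F2)
    r = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))   # POC surface
    peak = np.unravel_index(np.argmax(r), r.shape)
    # Convert the peak position to a signed (dy, dx) shift.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, r.shape)]
    return tuple(shift), r.max()

base = np.random.rand(64, 64)
shifted = np.roll(base, shift=(3, -5), axis=(0, 1))              # known displacement
print(poc_shift(shifted, base))                                  # ~((3, -5), peak ~1.0)
```

Because the correlation peak is obtained from a single window matching, no sweep over candidate depth steps is needed, which is the efficiency argument made in the passage.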
{
"docid": "fc0327de912ec8ef6ca33467d34bcd9e",
"text": "In this paper, a progressive fingerprint image compression (for storage or transmission) using edge detection scheme is adopted. The image is decomposed into two components. The first component is the primary component, which contains the edges, the other component is the secondary component, which contains the textures and the features. In this paper, a general grasp for the image is reconstructed in the first stage at a bit rate of 0.0223 bpp for Sample (1) and 0.0245 bpp for Sample (2) image. The quality of the reconstructed images is competitive to the 0.75 bpp target bit set by FBI standard. Also, the compression ratio and the image quality of this algorithm is competitive to other existing methods given in the literature [6]-[9]. The compression ratio for our algorithm is about 45:1 (0.180 bpp).",
"title": ""
},
{
"docid": "25a7f23c146add12bfab3f1fc497a065",
"text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).",
"title": ""
},
{
"docid": "cbc4fc5d233c55fcc065fcc64b0404d8",
"text": "PURPOSE\nTo determine if noise damage in the organ of Corti is different in the low- and high-frequency regions of the cochlea.\n\n\nMATERIALS AND METHODS\nChinchillas were exposed for 2 to 432 days to a 0.5 (low-frequency) or 4 kHz (high-frequency) octave band of noise at 47 to 95 dB sound pressure level. Auditory thresholds were determined before, during, and after the noise exposure. The cochleas were examined microscopically as plastic-embedded flat preparations. Missing cells were counted, and the sequence of degeneration was determined as a function of recovery time (0-30 days).\n\n\nRESULTS\nWith high-frequency noise, primary damage began as small focal losses of outer hair cells in the 4-8 kHz region. With continued exposure, damage progressed to involve loss of an entire segment of the organ of Corti, along with adjacent myelinated nerve fibers. Much of the latter loss is secondary to the intermixing of cochlear fluids through the damaged reticular lamina. With low-frequency noise, primary damage appeared as outer hair cell loss scattered over a broad area in the apex. With continued exposure, additional apical outer hair cells degenerated, while supporting cells, inner hair cells, and nerve fibers remained intact. Continued exposure to low-frequency noise also resulted in focal lesions in the basal cochlea that were indistinguishable from those resulting from exposure to high-frequency noise.\n\n\nCONCLUSIONS\nThe patterns of cochlear damage and their relation to functional measures of hearing in noise-exposed chinchillas are similar to those seen in noise-exposed humans. Thus, the chinchilla is an excellent model for studying noise effects, with the long-term goal of identifying ways to limit noise-induced hearing loss in humans.",
"title": ""
},
{
"docid": "576d911990bb207eebaaca6ab137cc7a",
"text": "The online fingerprints by biometric system is not widely used now a days and there is less scope as user is friendly with the system. This paper represents a framework and applying the latent fingerprints obtained from the crime scene. These prints would be matched with our database and we identify the criminal. For this process we have to get the fingerprints of all the citizens. This technique may reduce the crime to a large extent. Latent prints are different from the patent prints. These fingerprints are found at the time of crime and these fingerprints are left accidentally. By this approach we collect these fingerprints by chemicals, powder, lasers and other physical means. Sometimes, fingerprints have a broken curve and it is not so clear due to low pressure. We apply the M_join algorithm to join the curve to achieve better results. Thus, our proposed approach eliminates the pseudo minutiae and joins the broken curves in fingerprints.",
"title": ""
},
{
"docid": "0bef4c6547ac1266686bf53fe93f05fc",
"text": "According to some estimates, more than half of the world's population is multilingual to some extent. Because of the centrality of language use to human experience and the deep connections between linguistic and nonlinguistic processing, it would not be surprising to find that there are interactions between bilingualism and cognitive and brain processes. The present review uses the framework of experience-dependent plasticity to evaluate the evidence for systematic modifications of brain and cognitive systems that can be attributed to bilingualism. The review describes studies investigating the relation between bilingualism and cognition in infants and children, younger and older adults, and patients, using both behavioral and neuroimaging methods. Excluded are studies whose outcomes focus primarily on linguistic abilities because of their more peripheral contribution to the central question regarding experience-dependent changes to cognition. Although most of the research discussed in the review reports some relation between bilingualism and cognitive or brain outcomes, several areas of research, notably behavioral studies with young adults, largely fail to show these effects. These discrepancies are discussed and considered in terms of methodological and conceptual issues. The final section proposes an account based on \"executive attention\" to explain the range of research findings and to set out an agenda for the next steps in this field. (PsycINFO Database Record",
"title": ""
},
{
"docid": "9e737da857c76f9cebe4d295dc061e8f",
"text": "L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is not the case for adaptive gradient algorithms, such as Adam. While common deep learning frameworks of these algorithms implement L2 regularization (often calling it “weight decay” in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam’s generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). We also propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. Our source code is available at https://github.com/loshchil/AdamW-and-SGDW",
"title": ""
},
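To make the distinction in the passage above concrete, here is a minimal NumPy sketch of a single Adam step with either classic L2 regularization or decoupled weight decay. In the L2 variant the penalty's gradient flows through Adam's adaptive scaling; in the decoupled variant the decay is applied directly to the weights. Hyperparameter values are illustrative only.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              weight_decay=1e-2, decoupled=True):
    if not decoupled:
        grad = grad + weight_decay * w          # classic L2: decay enters the moment estimates
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                     # bias-corrected first moment
    v_hat = v / (1 - b2**t)                     # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        w = w - lr * weight_decay * w           # AdamW-style: decay applied to the weights directly
    return w, m, v

w = np.array([1.0, -2.0]); m = np.zeros(2); v = np.zeros(2)
w, m, v = adam_step(w, grad=np.array([0.1, -0.3]), m=m, v=v, t=1)
print(w)
```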
{
"docid": "5bd7df3bfcb5b99f8bcb4a9900af980e",
"text": "A learning model predictive controller for iterative tasks is presented. The controller is reference-free and is able to improve its performance by learning from previous iterations. A safe set and a terminal cost function are used in order to guarantee recursive feasibility and nondecreasing performance at each iteration. This paper presents the control design approach, and shows how to recursively construct terminal set and terminal cost from state and input trajectories of previous iterations. Simulation results show the effectiveness of the proposed control logic.",
"title": ""
},
{
"docid": "d0bfdb2e2637104eec45531821e7cab7",
"text": "Memory bandwidth severely limits the scalability and performance of multicore and manycore systems. Application performance can be very sensitive to both the delivered memory bandwidth and latency. In multicore systems, a memory channel is usually shared by multiple cores. Having the ability to precisely provision, schedule, and isolate memory bandwidth and latency on a per-core basis is particularly important when different memory guarantees are needed on a per-customer, per-application, or per-core basis. Infrastructure as a Service (IaaS) Cloud systems, and even general purpose multicores optimized for application throughput or fairness all benefit from the ability to control and schedule memory access on a fine-grain basis. In this paper, we propose MITTS (Memory Inter-arrival Time Traffic Shaping), a simple, distributed hardware mechanism which limits memory traffic at the source (Core or LLC). MITTS shapes memory traffic based on memory request inter-arrival time, enabling fine-grain bandwidth allocation. In an IaaS system, MITTS enables Cloud customers to express their memory distribution needs and pay commensurately. For instance, MITTS enables charging customers that have bursty memory traffic more than customers with uniform memory traffic for the same aggregate bandwidth. Beyond IaaS systems, MITTS can also be used to optimize for throughput or fairness in a general purpose multi-program workload. MITTS uses an online genetic algorithm to configure hardware bins, which can adapt for program phases and variable input sets. We have implemented MITTS in Verilog and have taped-out the design in a 25-core 32nm processor and find that MITTS requires less than 0.9% of core area. We evaluate across SPECint, PARSEC, Apache, and bhm Mail Server workloads, and find that MITTS achieves an average 1.18× performance gain compared to the best static bandwidth allocation, a 2.69× average performance/cost advantage in an IaaS setting, and up to 1.17× better throughput and 1.52× better fairness when compared to conventional memory bandwidth provisioning techniques.",
"title": ""
},
{
"docid": "6f26f4409d418fe69b1d43ec9b4f8b39",
"text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .",
"title": ""
},
{
"docid": "dbc64c508b074f435b4175e6c8b967d5",
"text": "Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.",
"title": ""
},
{
"docid": "685faa54046bcd70e21a7003cb1182e2",
"text": "We analyze to what extent the random SAT and Max-SAT problems differ in their properties. Our findings suggest that for random k-CNF with ratio in a certain range, Max-SAT can be solved by any SAT algorithm with subexponential slowdown, while for formulae with ratios greater than some constant, algorithms under the random walk framework require substantially different heuristics. In light of these results, we propose a novel probabilistic approach for random Max-SAT called ProMS. Experimental results illustrate that ProMS outperforms many state-of-the-art local search solvers on random Max-SAT benchmarks.",
"title": ""
},
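The "random walk framework" referenced above can be illustrated with a generic WalkSAT-style local search for unweighted Max-SAT; this is only a hedged illustration of that family of heuristics, not the ProMS solver itself. A CNF formula is represented as a list of clauses, each a list of signed integer literals.

```python
import random

def unsatisfied(clauses, assign):
    return [c for c in clauses if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def walksat_maxsat(clauses, n_vars, max_flips=10000, noise=0.5, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best, best_unsat = dict(assign), len(unsatisfied(clauses, assign))
    for _ in range(max_flips):
        unsat = unsatisfied(clauses, assign)
        if not unsat:
            return assign, 0                       # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))          # random-walk move
        else:                                      # greedy move: flip the variable that helps most
            var = min((abs(l) for l in clause),
                      key=lambda v: len(unsatisfied(clauses, {**assign, v: not assign[v]})))
        assign[var] = not assign[var]
        if len(unsatisfied(clauses, assign)) < best_unsat:
            best_unsat, best = len(unsatisfied(clauses, assign)), dict(assign)
    return best, best_unsat

clauses = [[1, -2], [2, 3], [-1, -3], [-2, -3], [1, 3]]   # toy 3-variable instance
print(walksat_maxsat(clauses, n_vars=3, max_flips=200))
```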
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
},
{
"docid": "c21a1a07918d86dab06d84e0e4e7dc05",
"text": "Big data potential value across business sectors has received tremendous attention from the practitioner and academia world. The huge amount of data collected in different forms in organizations promises to radically transform the business landscape globally. The impact of big data, which is spreading across all business sectors, has potential to create new opportunities for growth. With organizations now able to store huge diverse amounts of data from different sources and forms, big data is expected to deliver tremendous value across business sectors. This paper focuses on building a business case for big data adoption in organizations. This paper discusses some of the opportunities and potential benefits associated with big data adoption across various business sectors globally. The discussion is important for making a business case for big data investment in organizations, which is major challenge for its adoption globally. The paper uses the IT strategic grid to understand the current and future potential benefits of big data for different business sectors. The results of the study suggest that there is no one-size-fits-all to big data adoption potential benefits in organizations.",
"title": ""
},
{
"docid": "499e2c0a0170d5b447548f85d4a9f402",
"text": "OBJECTIVE\nTo discuss the role of proprioception in motor control and in activation of the dynamic restraints for functional joint stability.\n\n\nDATA SOURCES\nInformation was drawn from an extensive MEDLINE search of the scientific literature conducted in the areas of proprioception, motor control, neuromuscular control, and mechanisms of functional joint stability for the years 1970-1999.\n\n\nDATA SYNTHESIS\nProprioception is conveyed to all levels of the central nervous system. It serves fundamental roles for optimal motor control and sensorimotor control over the dynamic restraints.\n\n\nCONCLUSIONS/APPLICATIONS\nAlthough controversy remains over the precise contributions of specific mechanoreceptors, proprioception as a whole is an essential component to controlling activation of the dynamic restraints and motor control. Enhanced muscle stiffness, of which muscle spindles are a crucial element, is argued to be an important characteristic for dynamic joint stability. Articular mechanoreceptors are attributed instrumental influence over gamma motor neuron activation, and therefore, serve to indirectly influence muscle stiffness. In addition, articular mechanoreceptors appear to influence higher motor center control over the dynamic restraints. Further research conducted in these areas will continue to assist in providing a scientific basis to the selection and development of clinical procedures.",
"title": ""
},
{
"docid": "06c6bb292cfdd6383bf21a6ce3d57f78",
"text": "In this paper, we are concerned with trust modeling for agents in networked computing systems. As trust is a subjective notion that is invisible, implicit and uncertain in nature, many attempts have been made to model trust with aid of Bayesian probability theory, while the field lacks a global comprehensive analysis for variants of Bayesian trust models. We present a study to fill in this gap by giving a comprehensive review of the literature. A generic Bayesian trust (GBT) modeling perspective is highlighted here. It is shown that all models under survey can cast into a GBT based computing paradigm as special cases. We discuss both capabilities and limitations of the GBT perspective and point out open questions to answer for advancing it to become a pragmatic infrastructure for analyzing intrinsic relationships between variants of trust models and for developing novel ones for trust evaluation.",
"title": ""
},
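Many of the Bayesian trust models surveyed in the passage above build on a beta-reputation style update, where positive and negative interaction outcomes update a Beta(alpha, beta) belief whose mean serves as the trust score. The sketch below shows that generic rule; it is an illustration of the family, not the paper's specific GBT formulation.

```python
from dataclasses import dataclass

@dataclass
class BetaTrust:
    alpha: float = 1.0   # prior pseudo-count of positive outcomes
    beta: float = 1.0    # prior pseudo-count of negative outcomes

    def update(self, positive: int, negative: int) -> None:
        self.alpha += positive
        self.beta += negative

    @property
    def score(self) -> float:
        return self.alpha / (self.alpha + self.beta)   # expected probability of good behavior

t = BetaTrust()
t.update(positive=8, negative=2)      # ten observed interactions
print(round(t.score, 3))              # -> 0.75 under the uniform Beta(1, 1) prior
```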
{
"docid": "1e7cae07ab0ec1eb8fdc1c213ec37071",
"text": "CD4+CD25+ regulatory T cells are essential for the active suppression of autoimmunity. Here we report that the forkhead transcription factor Foxp3 is specifically expressed in CD4+CD25+ regulatory T cells and is required for their development.The lethal autoimmune syndrome observed in Foxp3-mutant scurfy mice and Foxp3-null mice results from a CD4+CD25+ regulatory T cell deficiency and not from a cell-intrinsic defect of CD4+CD25– T cells. CD4+CD25+ regulatory T cells rescue disease development and preferentially expand when transferred into neonatal Foxp3deficient mice. Furthermore, ectopic expression of Foxp3 confers suppressor function on peripheral CD4+CD25– T cells. Thus, Foxp3 is a critical regulator of CD4+CD25+ regulatory T cell development and function.",
"title": ""
}
] |
scidocsrr
|
07b0d64f70c8de6996f405d0fc6118a0
|
Understanding Adversarial Space Through the Lens of Attribution
|
[
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
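The Bayesian-uncertainty signal described above can be sketched with Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and use the variance of the predictions as an uncertainty score, with adversarial inputs tending to receive higher scores. The model, sample count, and thresholding are illustrative, not the paper's exact detector.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(256, 10))

def mc_dropout_uncertainty(model, x, n_samples=20):
    model.train()                      # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.var(dim=0).sum(dim=-1)   # summed predictive variance per input

x = torch.randn(4, 784)                   # stand-in for a batch of flattened images
print(mc_dropout_uncertainty(model, x))   # higher values -> flag as possibly adversarial
```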
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
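A minimal NumPy sketch of the DeepFool idea for its simplest case, an affine binary classifier f(x) = w·x + b: the smallest perturbation that crosses the decision boundary is the orthogonal projection onto it. The full algorithm in the passage iterates this linearization for multi-class deep networks; the numbers here are toy values.

```python
import numpy as np

def deepfool_affine(x, w, b, overshoot=0.02):
    f = np.dot(w, x) + b
    r = -(f / np.dot(w, w)) * w            # minimal L2 perturbation to reach the boundary
    return x + (1 + overshoot) * r         # small overshoot to actually flip the sign

w = np.array([2.0, -1.0]); b = 0.5
x = np.array([1.0, 0.3])                   # classified positive: f(x) = 2.2
x_adv = deepfool_affine(x, w, b)
print(np.dot(w, x) + b, np.dot(w, x_adv) + b)   # sign flips with a tiny perturbation
```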
] |
[
{
"docid": "b0bae633eb8b54a8a0a174da8eb59b26",
"text": " Advancement in payment technologies have an important impact on the quality of life. The emerging payment technologies create both opportunities and challenges for future. Being a quick and convenient process, contactless payment gained its momentum, especially in merchants, where throughput is the main important parameter. However, it poses risk to issuers as no robust verification method of customer is available. Thus giving rise to quests to evolve and sustain a wellorganized, efficient, reliable and secure unified payment system, which may contribute to the smooth functioning of the market by eliminating scratch in business. This article presents an approach and module by which one card can communicate with the other using Near Field Communication (NFC) technology to transfer money from payer’s bank to payee’s bank by digital means. This approach eliminates the need of physical cash and also serves all types of payment and identity needs. Embodiments of this approach furnish a medium for cashless card-to-card transaction. The module, which is called Swing-Pay, communicates with its concerned bank via GSM. The security of this module is intensified using biometric authentication. The article also presents an app on Android platform, which works as a scanner of the proposed module to read the identity details of concerned person, the owner of the card. We have also presented the prototype of a digital card. This card can also be used as virtual identity card (ID), accumulating the information of all ID cards including electronic Passport, Voter ID, and Driving License.",
"title": ""
},
{
"docid": "a14665d8ae0a471a56607bb175e6c8c6",
"text": "Multiple modalities often co-occur when describing natural phenomena. Learning a joint representation of these modalities should yield deeper and more useful representations. Previous generative approaches to multi-modal input either do not learn a joint distribution or require additional computation to handle missing data. Here, we introduce a multimodal variational autoencoder (MVAE) that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multi-modal inference problem. Notably, our model shares parameters to efficiently learn under any combination of missing modalities. We apply the MVAE on four datasets and match state-of-the-art performance using many fewer parameters. In addition, we show that the MVAE is directly applicable to weaklysupervised learning, and is robust to incomplete supervision. We then consider two case studies, one of learning image transformations—edge detection, colorization, segmentation—as a set of modalities, followed by one of machine translation between two languages. We find appealing results across this range of tasks.",
"title": ""
},
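The product-of-experts inference mentioned above can be sketched for the Gaussian case: each available modality contributes a Gaussian "expert" (mu_i, logvar_i), and the joint posterior is their precision-weighted product with a standard-normal prior expert; missing modalities are simply left out of the sum. This is the generic PoE rule, not the full MVAE network.

```python
import numpy as np

def product_of_experts(mus, logvars):
    # Prepend the N(0, I) prior expert, then combine by precision weighting.
    mus = np.vstack([np.zeros_like(mus[0])] + list(mus))
    precisions = np.vstack([np.ones_like(logvars[0])] + [np.exp(-lv) for lv in logvars])
    joint_precision = precisions.sum(axis=0)
    joint_mu = (precisions * mus).sum(axis=0) / joint_precision
    return joint_mu, 1.0 / joint_precision          # joint mean and variance

mu_image, lv_image = np.array([1.0, -0.5]), np.array([0.0, 0.0])    # one expert per modality
mu_text,  lv_text  = np.array([0.5,  0.5]), np.array([1.0, 1.0])
print(product_of_experts([mu_image, mu_text], [lv_image, lv_text]))
```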
{
"docid": "1bd75e455b57b14c2a275e50aff0d2db",
"text": "Keratosis pilaris is a common skin disorder comprising less common variants and rare subtypes, including keratosis pilaris rubra, erythromelanosis follicularis faciei et colli, and the spectrum of keratosis pilaris atrophicans. Data, and critical analysis of existing data, are lacking, so the etiologies, pathogeneses, disease associations, and treatments of these clinical entities are poorly understood. The present article aims to fill this knowledge gap by reviewing literature in the PubMed, EMBASE, and CINAHL databases and providing a comprehensive, analytical summary of the clinical characteristics and pathophysiology of keratosis pilaris and its subtypes through the lens of disease associations, genetics, and pharmacologic etiologies. Histopathologic, genomic, and epidemiologic evidence points to keratosis pilaris as a primary disorder of the pilosebaceous unit as a result of inherited mutations or acquired disruptions in various biomolecular pathways. Recent data highlight aberrant Ras signaling as an important contributor to the pathophysiology of keratosis pilaris and its subtypes. We also evaluate data on treatments for keratosis pilaris and its subtypes, including topical, systemic, and energy-based therapies. The effectiveness of various types of lasers in treating keratosis pilaris and its subtypes deserves wider recognition.",
"title": ""
},
{
"docid": "0e2cb28634a20c058f985065b53d34f6",
"text": "Although the construct of comfort has been analysed, diagrammed in a two-dimensional content map, and operationalized as a holistic outcome, it has not been conceptualized within the context of a broader theory for the discipline of nursing. The theoretical work presented here utilizes an intra-actional perspective to develop a theory of comfort as a positive outcome of nursing case. A model of human press is the framework within which comfort is related to (a) interventions that enhance the state of comfort and (b) desirable subsequent outcomes of nursing care. The paper concludes with a discussion about the theory of comfort as a significant one for the discipline of nursing.",
"title": ""
},
{
"docid": "cc37744c95e5e41cb46b166132da53f6",
"text": "This work is part of research to build a system to combine facial and prosodic information to recognize commonly occurring user states such as delight and frustration. We create two experimental situations to elicit two emotional states: the first involves recalling situations while expressing either delight or frustration; the second experiment tries to elicit these states directly through a frustrating experience and through a delightful video. We find two significant differences in the nature of the acted vs. natural occurrences of expressions. First, the acted ones are much easier for the computer to recognize. Second, in 90% of the acted cases, participants did not smile when frustrated, whereas in 90% of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. This paper begins to explore the differences in the patterns of smiling that are seen under natural frustration and delight conditions, to see if there might be something measurably different about the smiles in these two cases, which could ultimately improve the performance of classifiers applied to natural expressions.",
"title": ""
},
{
"docid": "367d1b8e188231145824d0577ab6bd40",
"text": "This paper describes the experiences of introducing ISO 9000 into Taiwan's higher education systems. Based on an empirical investigation and a case study, the authors argue that the implementation of ISO 9000 quality systems has a positive impact on the education quality. The benefits of ISO 9000 certification are further depicted for those interested in complying with the Standard. We also justify the current progress of the ISO 9000 implementation in Taiwan with recommendations for improvement.",
"title": ""
},
{
"docid": "467ff4b60acb874c0430ae4c20d62137",
"text": "The purpose of this paper is twofold. First, we give a survey of the known methods of constructing lattices in complex hyperbolic space. Secondly, we discuss some of the lattices constructed by Deligne and Mostow and by Thurston in detail. In particular, we give a unified treatment of the constructions of fundamental domains and we relate this to other properties of these lattices.",
"title": ""
},
{
"docid": "3277a9b4ff573e3b4647864a0560155f",
"text": "Mobile technologies are being used to deliver health behavior interventions. The study aims to determine how health behavior theories are applied to mobile interventions. This is a review of the theoretical basis and interactivity of mobile health behavior interventions. Many of the mobile health behavior interventions reviewed were predominately one way (i.e., mostly data input or informational output), but some have leveraged mobile technologies to provide just-in-time, interactive, and adaptive interventions. Most smoking and weight loss studies reported a theoretical basis for the mobile intervention, but most of the adherence and disease management studies did not. Mobile health behavior intervention development could benefit from greater application of health behavior theories. Current theories, however, appear inadequate to inform mobile intervention development as these interventions become more interactive and adaptive. Dynamic feedback system theories of health behavior can be developed utilizing longitudinal data from mobile devices and control systems engineering models.",
"title": ""
},
{
"docid": "fd5f3a14f731b4af60c86d7bac95e997",
"text": "(Document Summary) Direct selling as a type of non-store retailing continues to increase internationally and in Australia in its use and popularity. One non-store retailing method, multilevel marketing or network marketing, has recently incurred a degree of consumer suspicion and negative perceptions. A study was developed to investigate consumer perceptions and concerns in New South Wales and Victoria. Consumers were surveyed to determine their perception of direct selling and its relationship to consumer purchasing decisions. Responses indicate consumers had a negative perceptions towards network marketing, while holding a low positive view of direct selling. There appears to be no influence of network marketing on consumer purchase decisions. Direct selling, as a method of non-store retailing, has continued to increase in popularity in Australia and internationally. This study investigated network marketing as a type of direct selling in Australia, by examining consumers' perceptions. The results indicate that Australian consumers were generally negative and suspicious towards network marketing in Australia.",
"title": ""
},
{
"docid": "58e2cba4f609dce3b17e945f58d90c08",
"text": "We develop a theory of financing of entrepreneurial ventures via an initial coin offering (ICO). Pre-selling a venture’s output by issuing tokens allows the entrepreneur to transfer part of the venture risk to diversified investors without diluting her control rights. This, however, leads to an agency conflict between the entrepreneur and investors that manifests itself in underinvestment. We show that an ICO can dominate traditional venture capital (VC) financing when VC investors are under-diversified, when the idiosyncratic component of venture risk is large enough, when the payoff distribution is sufficiently right-skewed, and when the degree of information asymmetry between the entrepreneur and ICO investors is not too large. Overall, our model suggests that an ICO can be a viable financing alternative for some but not all entrepreneurial ventures. An implication is that while regulating ICOs to reduce the information asymmetry associated with them is desirable, banning them outright is not.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "25272839e0a346c4b018a78fcd2ba44e",
"text": "This paper explores the utility of machine learning methods for understanding bullying, a significant social-psychological issue in the United States, through social media data. Machine learning methods were applied to all public mentions of bullying on Twitter between September 1, 2011 and August 31, 2013 to extract the posts that referred to discrete bullying episodes (N = 9,764,583) to address five key questions. Most posts were authored by victims and reporters and referred to general forms of bullying. Posts frequently reflected self-disclosure about personal involvement in bullying. The number of posts that originated from a state was positively associated with the state population size; the timing of the posts reveal that more posts were made on weekdays than on Saturdays and more posts were made during the evening compared to daytime hours. Potential benefits of merging social science and computer science methods to enhance the study of bullying are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "1ac4ac9b112c2554db37de2070d7c2df",
"text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.",
"title": ""
},
{
"docid": "265a709088f671ba484ffba937ae2977",
"text": "We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s Color by Correlation method. We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. All data used for this study are available online at http://www.cs.sfu.ca/(tilde)color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/(tilde)color/code). Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically color by correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.",
"title": ""
},
{
"docid": "a1196d8624026339f66e843df68469d0",
"text": "Two or more isoforms of several cytokines including tumor necrosis factors (tnfs) have been reported from teleost fish. Although zebrafish (Danio rerio) and medaka (Oryzias latipes) possess two tnf-α genes, their genomic location and existence are yet to be described and confirmed. Therefore, we conducted in silico identification, synteny analysis of tnf-α and tnf-n from both the fish with that of human TNF/lymphotoxin loci and their expression analysis in zebrafish. We identified two homologs of tnf-α (named as tnf-α1 and tnf-α2) and a tnf-n gene from zebrafish and medaka. Genomic location of these genes was found to be as: tnf-α1, and tnf-n and tnf-α2 genes on zebrafish chromosome 19 and 15 and medaka chromosome 11 and 16, respectively. Several features such as existence of TNF family signature, conservation of genes in TNF loci with human chromosome, phylogenetic clustering and amino acid similarity with other teleost TNFs confirmed their identity as tnf-α and tnf-n. There were a constitutive expression of all three genes in different tissues, and an increased expression of tnf-α1 and -α2 and a varied expression of tnf-n ligand in zebrafish head kidney cells induced with 20 μg mL(-1) LPS in vitro. Our results suggest the presence of two tnf-α homologs on different chromosomes of zebrafish and medaka and correlate this incidence arising from the fish whole genome duplication event.",
"title": ""
},
{
"docid": "7110e68a420d10fa75a943d1c1f0bd42",
"text": "This paper proposes a compact microstrip Yagi-Uda antenna for 2.45 GHz radio frequency identification (RFID) handheld reader applications. The proposed antenna is etched on a piece of FR4 substrate with an overall size of 65 mm × 55 mm ×1.6 mm and consists of a microstrip balun, a dipole, and a director. The ground plane is designed to act as a reflector that contributes to enhancing the antenna gain. The measured 10-dB return loss bandwidth and peak gain achieved by the proposed antenna are 380 MHz and 7.5 dBi, respectively. In addition, a parametric study is conducted to facilitate the design and optimization processes for engineers.",
"title": ""
},
{
"docid": "2f180422f2cc0813f6d7e0b1d831fd3f",
"text": "This paper is demonstrating to create a system of multifactor authentication based on biometric verification. Our system use iris for the first factor and fingerprint for the second factor. Once an attacker attempts to attack the system, there must have two factors. If one of them is compromised or broken, the attacker still has at least one more barrier to breach before successfully breaking into the target. Furthermore, this system will be implemented to enhance security for accessing control login government system.",
"title": ""
},
{
"docid": "f8339417b0894191670d1528df7ac297",
"text": "OBJECTIVE\nThe purpose of this study was to reanalyze the results of a previously published trial that compared 3 methods of anterior colporrhaphy according to the clinically relevant definitions of success.\n\n\nSTUDY DESIGN\nA secondary analysis of a trial of 114 subjects who underwent surgery for anterior pelvic organ prolapse who were assigned randomly to standard anterior colporrhaphy, ultralateral colporrhaphy, or anterior colporrhaphy plus polyglactin 910 mesh from 1996-1999. For the current analysis, success was defined as (1) no prolapse beyond the hymen, (2) the absence of prolapse symptoms (visual analog scale ≤ 2), and (3) the absence of retreatment.\n\n\nRESULTS\nEighty-eight percent of the women met our definition of success at 1 year. One subject (1%) underwent surgery for recurrence 29 months after surgery. No differences among the 3 groups were noted for any outcomes.\n\n\nCONCLUSION\nReanalysis of a trial of 3 methods of anterior colporrhaphy revealed considerably better success with the use of clinically relevant outcome criteria compared with strict anatomic criteria.",
"title": ""
},
{
"docid": "87eb69d6404bf42612806a5e6d67e7bb",
"text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.",
"title": ""
}
] |
scidocsrr
|
f395393293c25e3f7525845ab3791e92
|
A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces
|
[
{
"docid": "aa5fc254d9f51cf881bf00c3c71e7a84",
"text": "PURPOSE\nThis article provides rehabilitation professionals and engineers with a theoretical and pragmatic rationale for the inclusion of haptic feedback in the rehabilitation of central nervous system disorders affecting the hand.\n\n\nMETHOD\nA narrative review of haptic devices used in sensorimotor hand rehabilitation was undertaken. Presented papers were selected to outline and clarify the underlying somatosensory mechanisms underpinning these technologies and provide exemplars of the evidence to date.\n\n\nRESULTS\nHaptic devices provide kinaesthetic and/or tactile stimulation. Kinaesthetic haptics are beginning to be incorporated in central nervous system rehabilitation; however, there has been limited development of tactile haptics. Clinical research in haptic rehabilitation of the hand is embryonic but initial findings indicate potential clinical benefit.\n\n\nCONCLUSIONS\nHaptic rehabilitation offers the potential to advance sensorimotor hand rehabilitation but both scientific and pragmatic developments are needed to ensure that its potential is realized.",
"title": ""
}
] |
[
{
"docid": "46d3097da826ebb2d56d0587fdd873f2",
"text": "This letter presents a new filter, based on substrate integrated folded waveguide (SIFW) technology, which exhibits compact size and good out-of-band rejection. The filter is based on a SIFW cavity, which guarantees size reduction, and the out-of-band rejection is controlled by the suppression and tuning of the high-order cavity modes. A detailed investigation of the cavity mode spectrum is presented, to illustrate the operation principle and the design of the filter. The interesting feature of this filter is the possibility to design the pass band and the return band by simply tuning the mode spectrum of the cavity, which is practically unaffected by the connection to the excitation ports. The fabrication and testing of a prototype operating at 4.5 GHz validate the proposed filter topology.",
"title": ""
},
{
"docid": "ef77d042a04b7fa704f13a0fa5e73688",
"text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \" learning rule, \" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocam-pus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.",
"title": ""
},
{
"docid": "ea8bc1970977c855fc72bbee9185e909",
"text": "This paper reports on a major Australian research project which examines whether the evolution in digital content creation and social media can create a new audience of active cultural participants. The project draws together experts from major Australian museums, libraries and screen centres to examine the evolution in digital contentcreation and social media. It explores whether organizations can become active in content generation ('new literacy'), and thereby be linked into new modes of distribution, calling into being 'new audiences'. The paper presents interim findings of the project, describing the theories and methodologies developed to investigate the rise of social media and, more broadly, digital content creation, within cultural institutions.",
"title": ""
},
{
"docid": "173811394fd49c15b151fc9059acbe13",
"text": "The 'jewel in the crown' from the MIT90s [Management in the 90s] program is undoubtedly the Strategic Alignment Model (SAM) of Henderson and Venkatraman.",
"title": ""
},
{
"docid": "84aacf4b56891e70063e438b0dc35040",
"text": "The increasing availability and maturity of both scalable computing architectures and deep syntactic parsers is opening up new possibilities for Relation Extraction (RE) on large corpora of natural language text. In this paper, we present FREEPAL, a resource designed to assist with the creation of relation extractors for more than 5,000 relations defined in the FREEBASE knowledge base (KB). The resource consists of over 10 million distinct lexico-syntactic patterns extracted from dependency trees, each of which is assigned to one or more FREEBASE relations with different confidence strengths. We generate the resource by executing a large-scale distant supervision approach on the CLUEWEB09 corpus to extract and parse over 260 million sentences labeled with FREEBASE entities and relations. We make FREEPAL freely available to the research community, and present a web demonstrator to the dataset, accessible from free-pal.appspot.com.",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "fc387da4792896b1c85d18e4bd5f7376",
"text": "It is generally understood that building software systems with components has many advantages but the difficulties of this approach should not be ignored. System evolution, maintenance, migration and compatibilities are some of the challenges met with when developing a component-based software system. Since most systems evolve over time, components must be maintained or replaced. The evolution of requirements affects not only specific system functions and particular components but also component-based architecture on all levels. Increased complexity is a consequence of different components and systems having different life cycles. In component-based systems it is easier to replace part of system with a commercial component. This process is however not straightforward and different factors such as requirements management, marketing issues, etc., must be taken into consideration. In this paper we discuss the issues and challenges encountered when developing and using an evolving component-based software system. An industrial control system has been used as a case study.",
"title": ""
},
{
"docid": "b9bb07dd039c0542a7309f2291732f82",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "5b43ea9e56c81e98c52b4041b0c32fdf",
"text": "A novel broadband probe-type waveguide-to-microstrip transition adapted for operation in V band is presented. The transition is realized on a standard high frequency printed circuit board (PCB) fixed between a standard WR-15 waveguide and a simple backshort. The microstrip-fed probe is placed at the same side of the PCB with the backshort and acts as an impedance matching element. The proposed transition additionally includes two through holes implemented on the PCB in the center of the transition area. Thus, significant part of the lossy PCB dielectric is removed from that area providing wideband and low-loss performance of the transition. Measurements show that the designed transition has the bandwidth of 50–70 GHz for the −10 dB level of the reflection coefficient with the loss level of only 0.75 dB within the transition bandwidth.",
"title": ""
},
{
"docid": "f810dbe1e656fe984b4b6498c1c27bcb",
"text": "Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.",
"title": ""
},
{
"docid": "3f394e57febd3ffdc7414cf1af94c53b",
"text": "Background recovery is a very important theme in computer vision applications. Recent research shows that robust principal component analysis (RPCA) is a promising approach for solving problems such as noise removal, video background modeling, and removal of shadows and specularity. RPCA utilizes the fact that the background is common in multiple views of a scene, and attempts to decompose the data matrix constructed from input images into a low-rank matrix and a sparse matrix. This is possible if the sparse matrix is sufficiently sparse, which may not be true in computer vision applications. Moreover, algorithmic parameters need to be fine tuned to yield accurate results. This paper proposes a fixed-rank RPCA algorithm for solving background recovering problems whose low-rank matrices have known ranks. Comprehensive tests show that, by fixing the rank of the low-rank matrix to a known value, the fixed-rank algorithm produces more reliable and accurate results than existing low-rank RPCA algorithm.",
"title": ""
},
{
"docid": "19acedd03589d1fd1173dd1565d11baf",
"text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.",
"title": ""
},
{
"docid": "986f9f66668bb4feec5900260003b069",
"text": "BACKGROUND\nIn March, 2016, the UK Government proposed a tiered levy on sugar-sweetened beverages (SSBs; high tax for drinks with >8 g of sugar per 100 mL, moderate tax for 5-8 g, and no tax for <5 g). We estimate the effect of possible industry responses to the levy on obesity, diabetes, and dental caries.\n\n\nMETHODS\nWe modelled three possible industry responses: reformulation to reduce sugar concentration, an increase of product price, and a change of the market share of high-sugar, mid-sugar, and low-sugar drinks. For each response, we defined a better-case and worse-case health scenario. We developed a comparative risk assessment model to estimate the UK health impact of each scenario on prevalence of obesity and incidence of dental caries and type 2 diabetes. The model combined data for sales and consumption of SSBs, disease incidence and prevalence, price elasticity estimates, and estimates of the association between SSB consumption and disease outcomes. We drew the disease association parameters from a meta-analysis of experimental studies (SSBs and weight change), a meta-analysis of prospective cohort studies (type 2 diabetes), and a prospective cohort study (dental caries).\n\n\nFINDINGS\nThe best modelled scenario for health is SSB reformulation, resulting in a reduction of 144 383 (95% uncertainty interval 5102-306 743; 0·9%) of 15 470 813 adults and children with obesity in the UK, 19 094 (6920-32 678; incidence reduction of 31·1 per 100 000 person-years) fewer incident cases of type 2 diabetes per year, and 269 375 (82 211-470 928; incidence reduction of 4·4 per 1000 person-years) fewer decayed, missing, or filled teeth annually. An increase in the price of SSBs in the better-case scenario would result in 81 594 (3588-182 669; 0·5%) fewer adults and children with obesity, 10 861 (3899-18 964; 17·7) fewer incident cases of diabetes per year, and 149 378 (45 231-262 013; 2·4) fewer decayed, missing, or filled teeth annually. Changes to market share to increase the proportion of low-sugar drinks sold in the better-case scenario would result in 91 042 (4289-204 903; 0·6%) fewer adults and children with diabetes, 1528 (4414-21 785; 19·7) fewer incident cases of diabetes per year, and 172 718 (47 919-294 499; 2·8) fewer decayed, missing, or filled teeth annually. The greatest benefit for obesity and oral health would be among individuals aged younger than 18 years, with people aged older than 65 years having the largest absolute decreases in diabetes incidence.\n\n\nINTERPRETATION\nThe health impact of the soft drinks levy is dependent on its implementation by industry. Uncertainty exists as to how industry will react and about estimation of health outcomes. Health gains could be maximised by substantial product reformulation, with additional benefits possible if the levy is passed on to purchasers through raising of the price of high-sugar and mid-sugar drinks and activities to increase the market share of low-sugar products.\n\n\nFUNDING\nNone.",
"title": ""
},
{
"docid": "79ad9125b851b6d2c3ed6fb1c5cf48e1",
"text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"title": ""
},
{
"docid": "925ae9febfc3e9ab02e76c517ed21bfc",
"text": "This study presents the macrosocial and macropsychological correlates of two cultural dimensions, Individualism-Collectivism and Hierarchy, based on a review of cross-cultural research. Correlations between the culturelevel value scores provided by Hofstede, Schwartz and Trompenaars and nation-level indices confirm their criterion validity. Thus power distance and collectivism are correlated with low social development (HDI index), income differences (Gini index), the socio-political corruption index, and the competitiveness index. The predominantly Protestant societies are more individualist and egalitarian, the Confucianist societies are more collectivist; and Islamic sociRésumé Cette étude présente les facteurs macro-sociaux et macro-psychologiques de deux dimensions culturelles, l’Individualisme-Collectivisme et la Hiérarchie ou Distance au Pouvoir, dimensions basées sur certaines révisions des recherches dans le domaine transculturel. Les corrélations entre les valeurs, au niveau culturel, fournies par Hofstede, Schwartz et Trompenaars, et des index socio-économiques confirment la validité de ces dimensions. La distance de pouvoir et le collectivisme sont associés au bas développement social (indice HDI), aux différences de revenus (indice Gini), à l’indice de corruption sociopolitique et de compétitivité. Les sociétés majoritairement protestantes sont plus individualistes et égalitaires, les sociétés confuciaMots-clés Culture, Individualisme, Collectivisme, Distance au Pouvoir o Hierarchie Key-words Culture, Individualism, Collectivism, Power Distance Correspondence concerning this article should be addressed either to Nekane Basabe, Universidad del País Vasco, Departamento de Psicología Social, Paseo de la Universidad, 7, 01006 Vitoria, Spain; or to Maria Ros, Universidad Complutense, Departamento Psicología Social, 28023 Madrid, Spain. Request for reprints should be directed to Nekane Basabe (email pspbaban@vf.ehu.es) or Maria Ros (mros@cps.ucm.es) This study was supported by the following Basque Country University Research Grants MCYT BSO2001-1236-CO-7-01, 9/UPV00109.231-13645/2001, from the University of the Basque Country and Spanish Government. * Nekane Basabe, University of the Basque Country, San Sebastián, Spain. ** María Ros, Complutense University of Madrid, Madrid, Spain. MEP 1/2005 18/04/05 17:47 Page 189",
"title": ""
},
{
"docid": "4d9312d22dcc37933d0108fbfacd1c38",
"text": "This study focuses on the use of different types of shear reinforcement in the reinforced concrete beams. Four different types of shear reinforcement are investigated; traditional stirrups, welded swimmer bars, bolted swimmer bars, and u-link bolted swimmer bars. Beam shear strength as well as beam deflection are the main two factors considered in this study. Shear failure in reinforced concrete beams is one of the most undesirable modes of failure due to its rapid progression. This sudden type of failure made it necessary to explore more effective ways to design these beams for shear. The reinforced concrete beams show different behavior at the failure stage in shear compare to the bending, which is considered to be unsafe mode of failure. The diagonal cracks that develop due to excess shear forces are considerably wider than the flexural cracks. The cost and safety of shear reinforcement in reinforced concrete beams led to the study of other alternatives. Swimmer bar system is a new type of shear reinforcement. It is a small inclined bars, with its both ends bent horizontally for a short distance and welded or bolted to both top and bottom flexural steel reinforcement. Regardless of the number of swimmer bars used in each inclined plane, the swimmer bars form plane-crack interceptor system instead of bar-crack interceptor system when stirrups are used. Several reinforced concrete beams were carefully prepared and tested in the lab. The results of these tests will be presented and discussed. The deflection of each beam is also measured at incrementally increased applied load.",
"title": ""
},
{
"docid": "1a095e16a26837e65a1c6692190b34c6",
"text": "Increasing documentation on the size and appearance of muscles in the lumbar spine of low back pain (LBP) patients is available in the literature. However, a comparative study between unoperated chronic low back pain (CLBP) patients and matched (age, gender, physical activity, height and weight) healthy controls with regard to muscle cross-sectional area (CSA) and the amount of fat deposits at different levels has never been undertaken. Moreover, since a recent focus in the physiotherapy management of patients with LBP has been the specific training of the stabilizing muscles, there is a need for quantifying and qualifying the multifidus. A comparative study between unoperated CLBP patients and matched control subjects was conducted. Twenty-three healthy volunteers and 32 patients were studied. The muscle and fat CSAs were derived from standard computed tomography (CT) images at three different levels, using computerized image analysis techniques. The muscles studied were: the total paraspinal muscle mass, the isolated multifidus and the psoas. The results showed that only the CSA of the multifidus and only at the lowest level (lower end-plate of L4) was found to be statistically smaller in LBP patients. As regards amount of fat, in none of the three studied muscles was a significant difference found between the two groups. An aetiological relationship between atrophy of the multifidus and the occurrence of LBP can not be ruled out as a possible explanation. Alternatively, atrophy may be the consequence of LBP: after the onset of pain and possible long-loop inhibition of the multifidus a combination of reflex inhibition and substitution patterns of the trunk muscles may work together and could cause a selective atrophy of the multifidus. Since this muscle is considered important for lumbar segmental stability, the phenomenon of atrophy may be a reason for the high recurrence rate of LBP.",
"title": ""
},
{
"docid": "73b12041a88a574aa19fe6cd006e9df9",
"text": "Recommender systems, especially the newly launched ones, have to deal with the data-sparsity issue, where little existing rating information is available. Recently, transfer learning has been proposed to address this problem by leveraging the knowledge from related recommender systems where rich collaborative data are available. However, most previous transfer learning models assume that entity-correspondences across different systems are given as input, which means that for any entity (e.g., a user or an item) in a target system, its corresponding entity in a source system is known. This assumption can hardly be satisfied in real-world scenarios where entity-correspondences across systems are usually unknown, and the cost of identifying them can be expensive. For example, it is extremely difficult to identify whether a user A from Facebook and a user B from Twitter are the same person. In this paper, we propose a framework to construct entity correspondence with limited budget by using active learning to facilitate knowledge transfer across recommender systems. Specifically, for the purpose of maximizing knowledge transfer, we first iteratively select entities in the target system based on our proposed criterion to query their correspondences in the source system. We then plug the actively constructed entity-correspondence mapping into a general transferred collaborative-filtering model to improve recommendation quality. We perform extensive experiments on real world datasets to verify the effectiveness of our proposed framework for this crosssystem recommendation problem.",
"title": ""
},
{
"docid": "16e3d66e7fd6621258d1bebdd469fd10",
"text": "Participatory modeling has grown in popularity in recent years with the acknowledgement that stakeholder knowledge is an essential component to informed environmental decision-making. Including stakeholders in model building and analysis allows decision-makers to understand important conceptual components in the environmental systems being managed, builds trust and common understanding between potentially diverse sets of competing groups, and reduces uncertainty by mining information that might not otherwise be a part of scientific assessment performed by experts alone. Software that facilitates the integration and analysis of stakeholder knowledge in modeling, however, is currently lacking. In this paper we report on the design and anticipated use of a participatory modeling tool based in fuzzy-logic cognitive mapping (FCM) called 'Mental Modeler' which makes the mental models of stakeholders explicit and provides an opportunity to incorporate different types of knowledge into environmental decision-making, define hypotheses to be tested, and run scenarios to determine perceived outcomes of proposed policies.",
"title": ""
},
{
"docid": "417fe20322c4458c58553c6d0984cabe",
"text": "Neural Turing Machines (NTMs) are an instance of Memory Augmented Neural Networks, a new class of recurrent neural networks which decouple computation from memory by introducing an external memory unit. NTMs have demonstrated superior performance over Long Short-Term Memory Cells in several sequence learning tasks. A number of open source implementations of NTMs exist but are unstable during training and/or fail to replicate the reported performance of NTMs. This paper presents the details of our successful implementation of a NTM. Our implementation learns to solve three sequential learning tasks from the original NTM paper. We find that the choice of memory contents initialization scheme is crucial in successfully implementing a NTM. Networks with memory contents initialized to small constant values converge on average 2 times faster than the next best memory contents initialization scheme.",
"title": ""
}
] |
scidocsrr
|
1b9bc649f2fb948a063f02dd685e6aba
|
Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure
|
[
{
"docid": "2a1920f22f22dcf473612a6d35cf0132",
"text": "We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a \"mixture of experts\" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data thus, a combined learning/classification operation much akin to what is done in image segmentation can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.",
"title": ""
},
{
"docid": "41b305c49b74063f16e5eb07bcb905d9",
"text": "Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 2 of M (one output unity, all others zero) and a squarederror or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and u priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.",
"title": ""
}
] |
[
{
"docid": "3e5fd66795e92999aacf6e39cc668aed",
"text": "A couple of popular methods are presented with their benefits and drawbacks. Commonly used methods are using wrapped phase and impulse response. With real time FFT analysis, magnitude and time domain can be analyzed simultaneously. Filtered impulse response and Cepstrum analysis are helpful tools when the spectral content differs and make it hard to analyse the impulse response. To make a successful time alignment the measurements must be anechoic. Methods such as multiple time windowing and averaging in frequency domain are presented. Group-delay and wavelets analysis are used to evaluate the measurements.",
"title": ""
},
{
"docid": "0937ad9a7795ae336e0c129920cd4b1d",
"text": "This paper examines determinants of purchasing flights from low-cost carrier (LCC) websites. In doing so an extended unified theory of acceptance and use of technology (UTAUT) model is proposed building on earlier work by Venkatesh, Thong, and Xu (2012). The results, derived from a sample of 1096 Spanish consumers of LCC flights, indicate that key determinants of purchasing are trust, habit, cost saving, ease of use, performance and expended effort, hedonic motivation and social factors. Of these variables, online purchase intentions, habit and ease of use are the most important. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bec2b4da297daca5a5f04affea2b16b2",
"text": "Using current reinforcement learning methods, it has recently become possible to learn to play unknown 3D games from raw pixels. In this work, we study the challenges that arise in such complex environments, and summarize current methods to approach these. We choose a task within the Doom game, that has not been approached yet. The goal for the agent is to fight enemies in a 3D world consisting of five rooms. We train the DQN and LSTMA3C algorithms on this task. Results show that both algorithms learn sensible policies, but fail to achieve high scores given the amount of training. We provide insights into the learned behavior, which can serve as a valuable starting point for further research in the Doom domain.",
"title": ""
},
{
"docid": "2575f09d6d723b6cc768efb6e24321ce",
"text": "This paper introduces an approach to performance animation that employs video cameras and a small set of retro-reflective markers to create a low-cost, easy-to-use system that might someday be practical for home use. The low-dimensional control signals from the user's performance are supplemented by a database of pre-recorded human motion. At run time, the system automatically learns a series of local models from a set of motion capture examples that are a close match to the marker locations captured by the cameras. These local models are then used to reconstruct the motion of the user as a full-body animation. We demonstrate the power of this approach with real-time control of six different behaviors using two video cameras and a small set of retro-reflective markers. We compare the resulting animation to animation from commercial motion capture equipment with a full set of markers.",
"title": ""
},
{
"docid": "c5c46fb727ff9447ebe75e3625ad375b",
"text": "Plenty of face detection and recognition methods have been proposed and got delightful results in decades. Common face recognition pipeline consists of: 1) face detection, 2) face alignment, 3) feature extraction, 4) similarity calculation, which are separated and independent from each other. The separated face analyzing stages lead the model redundant calculation and are hard for end-to-end training. In this paper, we proposed a novel end-to-end trainable convolutional network framework for face detection and recognition, in which a geometric transformation matrix was directly learned to align the faces, instead of predicting the facial landmarks. In training stage, our single CNN model is supervised only by face bounding boxes and personal identities, which are publicly available from WIDER FACE [36] dataset and CASIA-WebFace [37] dataset. Tested on Face Detection Dataset and Benchmark (FDDB) [11] dataset and Labeled Face in the Wild (LFW) [9] dataset, we have achieved 89.24% recall for face detection task and 98.63% verification accuracy for face recognition task simultaneously, which are comparable to state-of-the-art results.",
"title": ""
},
{
"docid": "0afe6d5922fb39021823dc4f77547549",
"text": "BACKGROUND\n'Shock wave' therapies are now extensively used in the treatment of musculoskeletal injuries. This systematic review summarises the evidence base for the use of these modalities.\n\n\nMETHODS\nA thorough search of the literature was performed to identify studies of adequate quality to assess the evidence base for shockwave therapies on pain in specific soft tissue injuries. Both focused extracorporeal shockwave therapy (F-ESWT) and radial pulse therapy (RPT) were examined.\n\n\nRESULTS\n23 appropriate studies were identified. There is evidence for the benefit of F-ESWT and of RPT in a number of soft tissue musculoskeletal conditions, and evidence that both treatment modalities are safe. There is evidence that F-ESWT is effective in the treatment of plantar fasciitis, calcific tendinitis, and that RPT is effective in plantar fasciitis. Where benefit is seen in F-ESWT, it appears to be dose dependent, with greater success seen with higher dose regimes. There is low level evidence for lack of benefit of low-dose F-ESWT and RPT in non-calcific rotator cuff disease and mixed evidence in lateral epicondylitis.",
"title": ""
},
{
"docid": "6e53c13c4da3f985f85d56d2c9b037e6",
"text": "Simulating human mobility is important in mobile networks because many mobile devices are either attached to or controlled by humans and it is very hard to deploy real mobile networks whose size is controllably scalable for performance evaluation. Lately various measurement studies of human walk traces have discovered several significant statistical patterns of human mobility. Namely these include truncated power-law distributions of flights, pause-times and inter-contact times, fractal way-points, and heterogeneously defined areas of individual mobility. Unfortunately, none of existing mobility models effectively captures all of these features. This paper presents a new mobility model called SLAW (Self-similar Least Action Walk) that can produce synthetic walk traces containing all these features. This is by far the first such model. Our performance study using using SLAW generated traces indicates that SLAW is effective in representing social contexts present among people sharing common interests or those in a single community such as university campus, companies and theme parks. The social contexts are typically common gathering places where most people visit during their daily lives such as student unions, dormitory, street malls and restaurants. SLAW expresses the mobility patterns involving these contexts by fractal waypoints and heavy-tail flights on top of the waypoints. We verify through simulation that SLAW brings out the unique performance features of various mobile network routing protocols.",
"title": ""
},
{
"docid": "0f80933b5302bd6d9595234ff8368ac4",
"text": "We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors.",
"title": ""
},
{
"docid": "2d644e4146358131d43fbe25ba725c74",
"text": "Neural interface technology has made enormous strides in recent years but stimulating electrodes remain incapable of reliably targeting specific cell types (e.g. excitatory or inhibitory neurons) within neural tissue. This obstacle has major scientific and clinical implications. For example, there is intense debate among physicians, neuroengineers and neuroscientists regarding the relevant cell types recruited during deep brain stimulation (DBS); moreover, many debilitating side effects of DBS likely result from lack of cell-type specificity. We describe here a novel optical neural interface technology that will allow neuroengineers to optically address specific cell types in vivo with millisecond temporal precision. Channelrhodopsin-2 (ChR2), an algal light-activated ion channel we developed for use in mammals, can give rise to safe, light-driven stimulation of CNS neurons on a timescale of milliseconds. Because ChR2 is genetically targetable, specific populations of neurons even sparsely embedded within intact circuitry can be stimulated with high temporal precision. Here we report the first in vivo behavioral demonstration of a functional optical neural interface (ONI) in intact animals, involving integrated fiberoptic and optogenetic technology. We developed a solid-state laser diode system that can be pulsed with millisecond precision, outputs 20 mW of power at 473 nm, and is coupled to a lightweight, flexible multimode optical fiber, approximately 200 microm in diameter. To capitalize on the unique advantages of this system, we specifically targeted ChR2 to excitatory cells in vivo with the CaMKIIalpha promoter. Under these conditions, the intensity of light exiting the fiber ( approximately 380 mW mm(-2)) was sufficient to drive excitatory neurons in vivo and control motor cortex function with behavioral output in intact rodents. No exogenous chemical cofactor was needed at any point, a crucial finding for in vivo work in large mammals. Achieving modulation of behavior with optical control of neuronal subtypes may give rise to fundamental network-level insights complementary to what electrode methodologies have taught us, and the emerging optogenetic toolkit may find application across a broad range of neuroscience, neuroengineering and clinical questions.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "209bf58b1222476aca884a83548327a0",
"text": "This report describes the development and validation/calibration of a structured food frequency questionnaire for use in a large-scale cohort study of diet and health in Chinese men and women aged 45-74 years in Singapore, the development of a food composition database for analysis of the dietary data, and the results of the dietary validation/calibration study. The present calibration study comparing estimated intakes from 24-hour recalls with those from the food frequency questionnaires revealed correlations of 0.24-0.79 for energy and nutrients among the Singapore Chinese, which are comparable to the correlation coefficients reported in calibration studies of other populations. We also report on the nutritional profiles of Singapore Chinese on the basis of results of 1,880 24-hour dietary recalls conducted on 1,022 (425 men and 597 women) cohort subjects. Comparisons with age-adjusted corresponding values for US whites and blacks show distinct differences in dietary intakes between the Singapore and US populations. The Singapore cohort will be followed prospectively to identify dietary associations with cancer risk and other health outcomes.",
"title": ""
},
{
"docid": "eec7a9a6859e641c3cc0ade73583ef5c",
"text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.",
"title": ""
},
{
"docid": "671952f18fb9041e7335f205666bf1f5",
"text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.",
"title": ""
},
{
"docid": "4591003089a1ccecd46fb1ac80ab3bb7",
"text": "Pre-season rugby training develops the physical requisites for competition and consists of a high volume of resistance training and anaerobic and aerobic conditioning. However, the effects of a rugby union pre-season in professional athletes are currently unknown. Therefore, the purpose of this investigation was to determine the effects of a 4-week pre-season on 33 professional rugby union players. Bench press and box squat increased moderately (13.6 kg, 90% confidence limits +/-2.9 kg and 17.6 +/- 8.0 kg, respectively) over the training phase. Small decreases in bench throw (70.6 +/- 53.5 W), jump squat (280.1 +/- 232.4 W), and fat mass (1.4 +/- 0.4 kg) were observed. In addition, small increases were seen in fat-free mass (2.0 +/- 0.6 kg) and flexed upper-arm girth (0.6 +/- 0.2 cm), while moderate increases were observed in mid-thigh girth (1.9 +/- 0.5 cm) and perception of fatigue (0.6 +/- 0.4 units). Increases in strength and body composition were observed in elite rugby union players after 4 weeks of intensive pre-season training, but this may have been the result of a return to fitness levels prior to the off-season. Decreases in power may reflect high training volumes and increases in perceived of fatigue.",
"title": ""
},
{
"docid": "b755647e0c32207c9a239b28e43bcb90",
"text": "The utilization of Information Technology (IT) is spreading in tourism industry with explosive growth of Internet, Social Network Service (SNS) through smart phone applications. Especially, since intensive information has high value on tourism area, IT is becoming a crucial factor in the tourism industry. The smart tourism is explained as an holistic approach that provide tour information, service related to travel, such as destination, food, transportation, reservation, travel guide, conveniently to tourists through IT devices. In our research, we focus on the Korea Tourism Organization’s (KTO’s) smart tourism case. This research concentrates on the necessity and effectiveness of smart tourism which delivers travel information in real-time base. Also, our study overview how KTO’s IT operation manages each channel, website, SNS, applications and finally suggests the smart tourism’s future direction for the successful realization.",
"title": ""
},
{
"docid": "3b5e584b95ae31ff94be85d7dbea1ccb",
"text": "Due to the fact that no NP-complete problem can be solved in polynomial time (unless P=NP), many approximability results (both positive and negative) of NP-hard optimization problems have appeared in the technical literature. In this compendium, we collect a large number of these results. ● Introduction ❍ NPO Problems: Definitions and Preliminaries ❍ Approximate Algorithms and Approximation Classes ❍ Completeness in Approximation Classes ❍ A list of NPO problems ❍ Improving the compendium ● Graph Theory ❍ Covering and Partitioning ❍ Subgraphs and Supergraphs ❍ Vertex Ordering file:///E|/COMPEND/COMPED19/COMPENDI.HTM (1 of 2) [19/1/2003 1:36:58] A compendium of NP optimization problems ❍ Isoand Other Morphisms ❍ Miscellaneous ● Network Design ❍ Spanning Trees ❍ Cuts and Connectivity ❍ Routing Problems ❍ Flow Problems ❍ Miscellaneous ● Sets and Partitions ❍ Covering, Hitting, and Splitting ❍ Weighted Set Problems ● Storage and Retrieval ❍ Data Storage ❍ Compression and Representation ❍ Miscellaneous ● Sequencing and Scheduling ❍ Sequencing on One Processor ❍ Multiprocessor Scheduling ❍ Shop Scheduling ❍ Miscellaneous ● Mathematical Programming ● Algebra and Number Theory ● Games and Puzzles ● Logic ● Program Optimization ● Miscellaneous ● References ● Index ● About this document ... Viggo Kann Mon Apr 21 13:07:14 MET DST 1997 file:///E|/COMPEND/COMPED19/COMPENDI.HTM (2 of 2) [19/1/2003 1:36:58]",
"title": ""
},
{
"docid": "d44b15b2e8bbc198030746a46c47e00c",
"text": "Recent advances in far-field optical nanoscopy have enabled fluorescence imaging with a spatial resolution of 20 to 50 nanometers. Multicolor super-resolution imaging, however, remains a challenging task. Here, we introduce a family of photo-switchable fluorescent probes and demonstrate multicolor stochastic optical reconstruction microscopy (STORM). Each probe consists of a photo-switchable \"reporter\" fluorophore that can be cycled between fluorescent and dark states, and an \"activator\" that facilitates photo-activation of the reporter. Combinatorial pairing of reporters and activators allows the creation of probes with many distinct colors. Iterative, color-specific activation of sparse subsets of these probes allows their localization with nanometer accuracy, enabling the construction of a super-resolution STORM image. Using this approach, we demonstrate multicolor imaging of DNA model samples and mammalian cells with 20- to 30-nanometer resolution. This technique will facilitate direct visualization of molecular interactions at the nanometer scale.",
"title": ""
},
{
"docid": "6b930b924ea560a4cbdff108f5d0c4af",
"text": "Abstract A blockchain constitutes a distributed ledger that records transactions across a network of agents. Blockchain’s value proposition requires that agents eventually agree on the ledger’s contents since payments possess risk otherwise. Restricted blockchains ensure this consensus by appointing a central authority to dictate payment validity. Permissionless blockchains (e.g. Bitcoin, Ethereum), however, admit no central authority and therefore face a non-trivial issue of inducing consensus endogenously. Nakamoto (2008) provided a temporary solution to the problem by invoking an economic mechanism known as Proof-of-Work (PoW). PoW, however, lacks sustainability, so, in recent years, a variety of alternatives have been proposed. This paper studies the most famous such alternative, Proof-of-Stake (PoS). I provide the first formal economic model of PoS and demonstrate that PoS induces consensus in equilibrium. My result arises because I endogenize blockchain coin prices. Propagating disagreement introduces the prospect of default and thereby reduces blockchain coin value which implies that stake-holders face an implicit cost from delaying consensus. PoS randomly selects a stake-holder to update the blockchain and provides her an explicit monetary incentive, a “block reward,” for her service. In the event of disagreement, block rewards constitute a perverse incentive, but I demonstrate that restricting updating ability to large stake-holders induces an equilibrium in which consensus obtains as soon as possible. I also demonstrate that consensus obtains eventually almost surely in any equilibrium so long as the blockchain employs a modest block reward schedule. My work reveals the economic viability of permissionless blockchains.",
"title": ""
},
{
"docid": "577b239cb33f88d95a84f150f12d3c12",
"text": "This paper presents UWSim: a new software tool for visualization and simulation of underwater robotic missions. The software visualizes an underwater virtual scenario that can be configured using standard modeling software. Controllable underwater vehicles, surface vessels and robotic manipulators, as well as simulated sensors, can be added to the scene and accessed externally through network interfaces. This allows to easily integrate the simulation and visualization tool with existing control architectures, thus allowing hardware-in-the-loop simulations (HIL). UWSim has been successfully used for simulating the logics of underwater intervention missions and for reproducing real missions from the captured logs. The software is offered as open source, thus filling a gap in the underwater robotics community, where commercial simulators oriented to ROV pilot training predominate.",
"title": ""
},
{
"docid": "140fd854c8564b75609f692229ac616e",
"text": "Modern search systems are based on dozens or even hundreds of ranking features. The dueling bandit gradient descent (DBGD) algorithm has been shown to effectively learn combinations of these features solely from user interactions. DBGD explores the search space by comparing a possibly improved ranker to the current production ranker. To this end, it uses interleaved comparison methods, which can infer with high sensitivity a preference between two rankings based only on interaction data. A limiting factor is that it can compare only to a single exploratory ranker. We propose an online learning to rank algorithm called multileave gradient descent (MGD) that extends DBGD to learn from so-called multileaved comparison methods that can compare a set of rankings instead of merely a pair. We show experimentally that MGD allows for better selection of candidates than DBGD without the need for more comparisons involving users. An important implication of our results is that orders of magnitude less user interaction data is required to find good rankers when multileaved comparisons are used within online learning to rank. Hence, fewer users need to be exposed to possibly inferior rankers and our method allows search engines to adapt more quickly to changes in user preferences.",
"title": ""
}
] |
scidocsrr
|
02998990a4a1e8f1ee9652b87fa435dd
|
Standardized Protocol Stack for the Internet of (Important) Things
|
[
{
"docid": "bf6a5ff65a60da049c6024375e2effb6",
"text": "This document updates RFC 4944, \"Transmission of IPv6 Packets over IEEE 802.15.4 Networks\". This document specifies an IPv6 header compression format for IPv6 packet delivery in Low Power Wireless Personal Area Networks (6LoWPANs). The compression format relies on shared context to allow compression of arbitrary prefixes. How the information is maintained in that shared context is out of scope. This document specifies compression of multicast addresses and a framework for compressing next headers. UDP header compression is specified within this framework. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.",
"title": ""
}
] |
[
{
"docid": "2e4accf8d6342a8b6f4ea20d1286c8e5",
"text": "Peer-to-Peer (P2P) lending is a popular way of lending in contemporary Internet financial filed. Comparing with the traditional bank lending, the annual risk evaluation is no longer applicable for P2P platform because of the short life cycle and a lot of transaction records. This paper presents a method to dynamically evaluate the operation risk of P2P plat- forms based on a short-time multi-source regression algorithm. Dynamic time windows are used to split up the lending records and linear regression method is used to quantify the dynamic risk index of P2P platforms. The experimental results show that the proposed method can reflect the visible operation situation of platforms, and give investors dynamic risk assessment and effective tips of the platforms.",
"title": ""
},
{
"docid": "66334ca62a62a78cab72c80b9a19072b",
"text": "End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and selfattention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new stateof-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.",
"title": ""
},
{
"docid": "8b79816cc07237489dafde316514702a",
"text": "In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.",
"title": ""
},
{
"docid": "5addf869fb072fb047b9e4ff4f1dc3eb",
"text": "This paper presents type classes, a new approach to ad-hoc polymorphism. Type classes permit overloading of arithmetic operators such as multiplication, and generalise the “eqtype variables” of Standard ML. Type classes extend the Hindley/Milner polymorphic type system, and provide a new approach to issues that arise in object-oriented programming, bounded type quantification, and abstract data types. This paper provides an informal introduction to type classes, and defines them formally by means of type inference rules.",
"title": ""
},
{
"docid": "40cd5516298028be83576b9ec5c5c746",
"text": "Security is becoming an increasingly important issue for IT systems, yet it is often dealt with as separate from mainstream systems and software development and in many cases neglected or addressed post-hoc, yielding costly and unsatisfactory solutions. One idea to improve the focus on security might be to include such concerns into mainstream diagram notations used in information systems analysis, and one existing proposal for this is misuse cases, allowing for representation of attack use cases together with the normal legitimate use cases of a system. While this technique has shown much promise, it is not equally useful for all kinds of attack. In this paper we look into another type of technique that could complement misuse cases for early elicitation of security requirements, namely mal-activity diagrams. These allow the inclusion of hostile activities together with legitimate activities in business process models. Through some examples and a small case study, mal-activity diagrams are shown to have strengths in many aspects where misuse cases have weaknesses.",
"title": ""
},
{
"docid": "99517fd63982a47aa18366780586d327",
"text": "A low-profile substrate-integrated lens antenna is designed using planar metamaterials for a broadband operation. The lens antenna is based on embedding a Vivaldi antenna source inside a parallel-plate waveguide to illuminate a half Maxwell fish-eye (HMFE) lens operating in X-band. The focusing condition of the lens, requiring a gradient refractive index is achieved through the use of complementary nonresonant metamaterial structures. Numerical simulations are performed to determine the suitable unit cells geometry with respect to the wave launcher inserted into the parallel-plate waveguide. The electric field distribution inside the antenna system has also been explored numerically. Far-field radiation patterns have been measured on a fabricated prototype in an anechoic chamber. It has been shown from both near- and far-field plots that the proposed planar antenna presents good focusing properties.",
"title": ""
},
{
"docid": "4a3951e865671f8c051f011e5e4459ae",
"text": "Intrusion Detection System (IDS) have become increasingly popular over the past years as an important network security technology to detect cyber attacks in a wide variety of network communication. IDS monitors' network or host system activities by collecting network information, and analyze this information for malicious activities. Cloud computing, with the concept of Software as a Service (SaaS) presents an exciting benefit when it enables providers to rent their services to users in perform complex tasks over the Internet. In addition, Cloud based services reduce a cost in investing new infrastructure, training new personnel, or licensing new software. In this paper, we introduce a novel framework based on Cloud computing called Cloud-based Intrusion Detection Service (CBIDS). This model enables the identification of malicious activities from different points of network and overcome the deficiency of classical intrusion detection. CBIDS can be implemented to detect variety of attacks in private and public Clouds.",
"title": ""
},
{
"docid": "6d620c1862b053c97e3ce29a415550e1",
"text": "To understand whether a user is satisfied with the current search results, implicit behavior is a useful data source, with clicks being the best-known implicit signal. However, it is possible for a non-clicking user to be satisfied and a clicking user to be dissatisfied. Here we study additional implicit signals based on the relationship between the user's current query and the next query, such as their textual similarity and the inter-query time. Using a large unlabeled dataset, a labeled dataset of queries and a labeled dataset of user tasks, we analyze the relationship between these signals. We identify an easily-implemented rule that indicates dissatisfaction: that a similar query issued within a time interval that is short enough (such as five minutes) implies dissatisfaction. By incorporating additional query-based features in the model, we show that a query-based model (with no click information) can indicate satisfaction more accurately than click-based models. The best model uses both query and click features. In addition, by comparing query sequences in successful tasks and unsuccessful tasks, we observe that search success is an incremental process for successful tasks with multiple queries.",
"title": ""
},
{
"docid": "b163fb3faa31f6db35599d32d7946523",
"text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.",
"title": ""
},
{
"docid": "8dae37ecc2e1bdb6bc8a625b565ea7e8",
"text": "Friendships are essential for adolescent social development. However, they may be pursued for varying motives, which, in turn, may predict similarity in friendships via social selection or social influence processes, and likely help to explain friendship quality. We examined the effect of early adolescents' (N = 374, 12-14 years) intrinsic and extrinsic friendship motivation on friendship selection and social influence by utilizing social network modeling. In addition, longitudinal relations among motivation and friendship quality were estimated with structural equation modeling. Extrinsic motivation predicted activity in making friendship nominations during the sixth grade and lower friendship quality across time. Intrinsic motivation predicted inactivity in making friendship nominations during the sixth, popularity as a friend across the transition to middle school, and higher friendship quality across time. Social influence effects were observed for both motives, but were more pronounced for intrinsic motivation.",
"title": ""
},
{
"docid": "62c01560bc79c3c9d9a21b0b1fbab2e6",
"text": "This paper presents an innovative and open concept for secure mobile payments, based on payment applications hosted in the cloud. It details an experimental Android platform, and NFC payment experiments performed with this platform. All the platform components rely on open technologies, i.e. available in commercial devices or specified by IETF drafts. On the mobile side security is enforced by a dedicated protocol (based on TLS), running in a secure element. This protocol manages the access to remote payment applications, running in secure elements, hosted in dedicated servers. This approach creates a new entity that collects meaningful and relevant data, dealing with user's purchases. It also increases the level of trust both for consumers and banks.",
"title": ""
},
{
"docid": "075e263303b73ee5d1ed6cff026aee63",
"text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.",
"title": ""
},
{
"docid": "78c2c15faacbc4b82d2efb75758f36bf",
"text": "The application of deep learning techniques resulted in remarkable improvement of machine learning models. In this paper we provide detailed characterizations of deep learning models used in many Facebook social network services. We present computational characteristics of our models, describe high-performance optimizations targeting existing systems, point out their limitations and make suggestions for the future general-purpose/accelerated inference hardware. Also, we highlight the need for better co-design of algorithms, numerics and computing platforms to address the challenges of workloads often run in data centers.",
"title": ""
},
{
"docid": "aef66fafaad00b5374b96f5270004cd9",
"text": "The Semantic Web could be a crucial tool to terrorism researchers, but to achieve this potential an accessible and flexible but comprehensive ontology needs to be designed to describe terrorist activity. Terrorist events are complicated phenomena that involve a large variety of situations and relationships. This paper addresses some of the issues the authors have encountered trying to build such an ontology – particularly how to describe sequences of events and the social networks that underpin terrorist",
"title": ""
},
{
"docid": "8aefd572e089cb29c13cefc6e59bdda8",
"text": "Different linguistic perspectives causes many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improve the performance for each single criterion. However, it is interesting to exploit these different criteria and mining their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance of each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on Github1.",
"title": ""
},
{
"docid": "aa35675d13972c585252a46fd520f374",
"text": "We present a multi-layered mapping, planning, and command execution system developed and tested on the LAGR mobile robot. Key to robust performance under uncertainty is the combination of a short-range perception system operating at high frame rate and low resolution and a long-range, adaptive vision system operating at lower frame rate and higher resolution. The short-range module performs local planning and obstacle avoidance with fast reaction times, while the long-range module performs strategic visual planning. Probabilistic traversability labels provided by the perception modules are combined and accumulated into a robot-centered hyperbolic-polar map with a 200 meter effective range. Instead of using a dynamical model of the robot for short-range planning, the system uses a large lookup table of physically-possible trajectory segments recorded on the robot in a wide variety of driving conditions. Localization is performed using a combination of GPS, wheel odometry, IMU, and a high-speed, low-complexity rotational visual odometry module. The end to end system was developed and tested on the LAGR mobile robot, and was verified in independent government tests.",
"title": ""
},
{
"docid": "fd7799d569bdc4ad48a88070974f6c13",
"text": "This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views, a total of approximately 3204 views. Each of the physical scenes contain 10 occluded objects resulting in a dataset with 32040 unique object poses and 45 different object models. The 45 object models are full 360 degree models which are scanned with a high precision structured light scanner and a turntable. All the included objects belong to different geometric groups, concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat and cylindrical objects. It is our objective that this dataset contributes to the future development of next generation of 3D object recognition algorithms. The dataset is public available at http://roboimagedata.compute.dtu.dk/.",
"title": ""
},
{
"docid": "2f0da9f8dac07ded1ce4282e4888c538",
"text": "This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.",
"title": ""
},
{
"docid": "a2d76e1217b0510f82ebccab39b7d387",
"text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. K-water has already completed two floating photovoltaic systems that enable generation of 100kW and 500kW respectively. In this paper, the generation efficiency of floating and land photovoltaic systems were compared and analyzed. Floating PV has shown greater generation efficiency by over 10% compared with the general PV systems installed overland",
"title": ""
},
{
"docid": "8ddf705b1fdd09f33870e940f19aa0e2",
"text": "BACKGROUND\nThe prevalence of obesity has increased substantially over the past 30 years. We performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic.\n\n\nMETHODS\nWe evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. The body-mass index was available for all subjects. We used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends, siblings, spouse, and neighbors.\n\n\nRESULTS\nDiscernible clusters of obese persons (body-mass index [the weight in kilograms divided by the square of the height in meters], > or =30) were present in the network at all time points, and the clusters extended to three degrees of separation. These clusters did not appear to be solely attributable to the selective formation of social ties among obese persons. A person's chances of becoming obese increased by 57% (95% confidence interval [CI], 6 to 123) if he or she had a friend who became obese in a given interval. Among pairs of adult siblings, if one sibling became obese, the chance that the other would become obese increased by 40% (95% CI, 21 to 60). If one spouse became obese, the likelihood that the other spouse would become obese increased by 37% (95% CI, 7 to 73). These effects were not seen among neighbors in the immediate geographic location. Persons of the same sex had relatively greater influence on each other than those of the opposite sex. The spread of smoking cessation did not account for the spread of obesity in the network.\n\n\nCONCLUSIONS\nNetwork phenomena appear to be relevant to the biologic and behavioral trait of obesity, and obesity appears to spread through social ties. These findings have implications for clinical and public health interventions.",
"title": ""
}
] |
scidocsrr
|
f2a0248393c97d8362a6179e650ac61c
|
Contextual and Feature-based Models by PolyU Team at the NTCIR-13 STC-2 Task †
|
[
{
"docid": "9b1643284b783f2947be11f16ae8d942",
"text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.",
"title": ""
},
{
"docid": "5132cf4fdbe55a47214f66738599df78",
"text": "Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a novel hierarchical recurrent encoder-decoder architecture that makes possible to account for sequences of previous queries of arbitrary lengths. As a result, our suggestions are sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that our model outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our architecture is general enough to be used in a variety of other applications.",
"title": ""
},
{
"docid": "7e06f62814a2aba7ddaff47af62c13b4",
"text": "Natural language conversation is widely regarded as a highly difficult problem, which is usually attacked with either rule-based or learning-based models. In this paper we propose a retrieval-based automatic response model for short-text conversation, to exploit the vast amount of short conversation instances available on social media. For this purpose we introduce a dataset of short-text conversation based on the real-world instances from Sina Weibo (a popular Chinese microblog service), which will be soon released to public. This dataset provides rich collection of instances for the research on finding natural and relevant short responses to a given short text, and useful for both training and testing of conversation models. This dataset consists of both naturally formed conversations, manually labeled data, and a large repository of candidate responses. Our preliminary experiments demonstrate that the simple retrieval-based conversation model performs reasonably well when combined with the rich instances in our dataset.",
"title": ""
}
] |
[
{
"docid": "5a4315e5887bdbb6562e76b54d03beeb",
"text": "A combination of conventional cross sectional process and device simulations combined with top down and 3D device simulations have been used to design and optimise the integration of a 100V Lateral DMOS (LDMOS) device for high side bridge applications. This combined simulation approach can streamline the device design process and gain important information about end effects which are lost from 2D cross sectional simulations. Design solutions to negate detrimental end effects are proposed and optimised by top down and 3D simulations and subsequently proven on tested silicon.",
"title": ""
},
{
"docid": "172e7d3c18a1b6f2025f3f13719067d5",
"text": "Investigating the nature of system intrusions in large distributed systems remains a notoriously difficult challenge. While monitoring tools (e.g., Firewalls, IDS) provide preliminary alerts through easy-to-use administrative interfaces, attack reconstruction still requires that administrators sift through gigabytes of system audit logs stored locally on hundreds of machines. At present, two fundamental obstacles prevent synergy between system-layer auditing and modern cluster monitoring tools: 1) the sheer volume of audit data generated in a data center is prohibitively costly to transmit to a central node, and 2) systemlayer auditing poses a “needle-in-a-haystack” problem, such that hundreds of employee hours may be required to diagnose a single intrusion. This paper presents Winnower, a scalable system for auditbased cluster monitoring that addresses these challenges. Our key insight is that, for tasks that are replicated across nodes in a distributed application, a model can be defined over audit logs to succinctly summarize the behavior of many nodes, thus eliminating the need to transmit redundant audit records to a central monitoring node. Specifically, Winnower parses audit records into provenance graphs that describe the actions of individual nodes, then performs grammatical inference over individual graphs using a novel adaptation of Deterministic Finite Automata (DFA) Learning to produce a behavioral model of many nodes at once. This provenance model can be efficiently transmitted to a central node and used to identify anomalous events in the cluster. We have implemented Winnower for Docker Swarm container clusters and evaluate our system against real-world applications and attacks. We show that Winnower dramatically reduces storage and network overhead associated with aggregating system audit logs, by as much as 98%, without sacrificing the important information needed for attack investigation. Winnower thus represents a significant step forward for security monitoring in distributed systems.",
"title": ""
},
{
"docid": "6198021bd0d119f806b8102a54f1e090",
"text": "Six of the ten leading causes of death in the United States, including cancer, diabetes, and heart disease, can be directly linked to diet. Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many of the above chronic diseases. Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. In this paper we compare two techniques of estimating food portion size from images of food. The techniques are based on 3D geometric models and depth images. An expectation-maximization based technique is developed to detect the reference plane in depth images, which is essential for portion size estimation using depth images. Our experimental results indicate that volume estimation based on geometric models is more accurate for objects with well-defined 3D shapes compared to estimation using depth images.",
"title": ""
},
{
"docid": "e458ba119fe15f17aa658c5b42a21e2b",
"text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. Firstly, there exist obviously spoofing media around the faces in most conditions, which reflect incident lights in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, lighting feature, extracted only from face areas, is utilized to detect spoofing attacks of deliberately cropped medium. Merging the two features, we present a face spoofing detection system. In several experiments on self collected datasets with different spoofing media, we demonstrate the excellent results and robustness of proposed method.",
"title": ""
},
{
"docid": "fb809c5e2a15a49a449a818a1b0d59a5",
"text": "Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of \"quiet wakefulness\" were characterized by state fluctuations on a timescale of 1-2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.",
"title": ""
},
{
"docid": "6289f4eea3f0c99d1dfafc5cb90de607",
"text": "In this paper, for the first time, we introduce a multiple instance (MI) deep hashing technique for learning discriminative hash codes with weak bag-level supervision suited for large-scale retrieval. We learn such hash codes by aggregating deeply learnt hierarchical representations across bag members through a dedicated MI pool layer. For better trainability and retrieval quality, we propose a two-pronged approach that includes robust optimization and training with an auxiliary single instance hashing arm which is down-regulated gradually. We pose retrieval for tumor assessment as an MI problem because tumors often coexist with benign masses and could exhibit complementary signatures when scanned from different anatomical views. Experimental validations on benchmark mammography and histology datasets demonstrate improved retrieval performance over the state-of-the-art methods.",
"title": ""
},
{
"docid": "9504571e66ea9071c6c227f61dfba98f",
"text": "Recent research has shown that although Reinforcement Learning (RL) can benefit from expert demonstration, it usually takes considerable efforts to obtain enough demonstration. The efforts prevent training decent RL agents with expert demonstration in practice. In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training. Under the framework, we propose Active Deep Q-Network, a novel query strategy which adapts to the dynamically-changing distributions during the RL training process by estimating the uncertainty of recent states. The expert demonstration data within Active DQN are then utilized by optimizing supervised max-margin loss in addition to temporal difference loss within usual DQN training. We propose two methods of estimating the uncertainty based on two state-of-the-art DQN models, namely the divergence of bootstrapped DQN and the variance of noisy DQN. The empirical results validate that both methods not only learn faster than other passive expert demonstration methods with the same amount of demonstration and but also reach super-expert level of performance across four different tasks.",
"title": ""
},
{
"docid": "6aa1c48fcde6674990a03a1a15b5dc0e",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.",
"title": ""
},
{
"docid": "fa042f86d7d01b38e874a7a09bf00f34",
"text": "Keys for graphs aim to uniquely identify entities represented by vertices in a graph. We propose a class of keys that are recursively defined in terms of graph patterns, and are interpreted with subgraph isomorphism. Extending conventional keys for relations and XML, these keys find applications in object identification, knowledge fusion and social network reconciliation. As an application, we study the entity matching problem that, given a graph G and a set Σ of keys, is to find all pairs of entities (vertices) in G that are identified by keys in Σ. We show that the problem is intractable, and cannot be parallelized in logarithmic rounds. Nonetheless, we provide two parallel scalable algorithms for entity matching, in MapReduce and a vertex-centric asynchronous model. Using real-life and synthetic data, we experimentally verify the effectiveness and scalability of the algorithms.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "457684e85d51869692aab90231a711a1",
"text": "Cassandra is a distributed storage system for managing structured data that is designed to scale to a very large size across many commodity servers, with no single point of failure. Reliability at massive scale is a very big challenge. Outages in the service can have significant negative impact. Hence Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different datacenters). At this scale, small and large components fail continuously; the way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. Cassandra has achieved several goals--scalability, high performance, high availability and applicability. In many ways Cassandra resembles a database and shares many design and implementation strategies with databases. Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format.",
"title": ""
},
{
"docid": "6df61e330f6b71c4ef136e3a2220a5e2",
"text": "In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries had also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "ccd6e2b8dac7bf25e9ac70ee35a06751",
"text": "In this letter, an ultrawideband (UWB) bandpass filter with a band notch is proposed. The UWB BPF (3.1-10.6 GHz) is realized by cascading a distributed high-pass filter and an elliptic low-pass filter with an embedded stepped impedance resonator (SIR) to achieve a band notch characteristic. The notch band is obtained at 5.22 GHz. It is shown that the notch frequency can be tuned by changing the impedance ratio of the embedded SIR. A fabricated prototype of the proposed UWB bandpass filter is developed. The inband and out-of-band performance obtained by measurement, EM simulation, and that with an equivalent circuit model are in good agreement.",
"title": ""
},
{
"docid": "3e6c4f94570670e13f357a5ceff83ed3",
"text": "Day by day more and more devices are getting connected to the Internet and with the advent of the Internet of Things, this rate has had an exponential growth. The lack of security in devices connected to the IoT is making them hot targets for cyber-criminals and strength of botnet attacks have increased drastically. Botnets are the technological backbones of multitudinous attacks including Distributed Denial of Service (DDoS), SPAM, identity theft and organizational spying. The 2016 Dyn cyber attack involved multiple DDoS attacks with an estimated throughput of 1.2 terabits per second; the attack is the largest DDoS attack on record. In this paper, we compare three different techniques for botnet detection with each having its unique use cases. The results of the detection methods were verified using ISCX Intrusion Detection Dataset and the CTU-13 Dataset.",
"title": ""
},
{
"docid": "0c76df51ba5e2d1aff885ac8fd146de8",
"text": "A design concept for a planar antenna array for Global Positioning System (GPS) applications is presented in this paper. A 4-element wideband circularly polarized array, which utilizes multi-layer microstrip patch antenna technology, was successfully designed and tested. The design achieves a very low axial ratio performance without compromising fabrication simplicity and overall antenna performance.",
"title": ""
},
{
"docid": "3f904e591a46f770e9a1425e6276041b",
"text": "Several decades of research in underwater communication and networking has resulted in novel and innovative solutions to combat challenges such as long delay spread, rapid channel variation, significant Doppler, high levels of non-Gaussian noise, limited bandwidth and long propagation delays. Many of the physical layer solutions can be tested by transmitting carefully designed signals, recording them after passing through the underwater channel, and then processing them offline using appropriate algorithms. However some solutions requiring online feedback to the transmitter cannot be tested without real-time processing capability in the field. Protocols and algorithms for underwater networking also require real-time communication capability for experimental testing. Although many modems are commercially available, they provide limited flexibility in physical layer signaling and sensing. They also provide limited control over the exact timing of transmission and reception, which can be critical for efficient implementation of some networking protocols with strict time constraints. To aid in our physical and higher layer research, we developed the UNET-2 software-defined modem with flexibility and extensibility as primary design objectives. We present the hardware and software architecture of the modem, focusing on the flexibility and adaptability that it provides researchers with. We describe the network stack that the modem uses, and show how it can also be used as a powerful tool for underwater network simulation. We illustrate the flexibility provided by the modem through a number of practical examples and experiments.",
"title": ""
},
{
"docid": "2da44919966d841d4a1d6f3cc2a648e9",
"text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR les 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66lambdam in diameter and 0.16lambdam in height, where lambdam is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.",
"title": ""
},
{
"docid": "9c68b87f99450e85f3c0c6093429937d",
"text": "We present a method for activity recognition that first estimates the activity performer's location and uses it with input data for activity recognition. Existing approaches directly take video frames or entire video for feature extraction and recognition, and treat the classifier as a black box. Our method first locates the activities in each input video frame by generating an activity mask using a conditional generative adversarial network (cGAN). The generated mask is appended to color channels of input images and fed into a VGG-LSTM network for activity recognition. To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system makes activity prediction for each video frame and achieves performance comparable to the state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks facilitate the learning of features that are representative of the activity rather than accidental surrounding information.",
"title": ""
}
] |
scidocsrr
|
7341c82e76f53843640f1eadff1aaf5d
|
A review of inverse reinforcement learning theory and recent advances
|
[
{
"docid": "cae4703a50910c7718284c6f8230a4bc",
"text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.",
"title": ""
},
{
"docid": "fb4837a619a6b9e49ca2de944ec2314e",
"text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
}
] |
[
{
"docid": "d38e5fa4adadc3e979c5de812599c78a",
"text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.",
"title": ""
},
{
"docid": "26295dded01b06c8b11349723fea81dd",
"text": "The increasing popularity of parametric design tools goes hand in hand with the use of building performance simulation (BPS) tools from the early design phase. However, current methods require a significant computational time and a high number of parameters as input, as they are based on traditional BPS tools conceived for detailed building design phase. Their application to the urban scale is hence difficult. As an alternative to the existing approaches, we developed an interface to CitySim, a validated building simulation tool adapted to urban scale assessments, bundled as a plug-in for Grasshopper, a popular parametric design platform. On the one hand, CitySim allows faster simulations and requires fewer parameters than traditional BPS tools, as it is based on algorithms providing a good trade-off between the simulations requirements and their accuracy at the urban scale; on the other hand, Grasshopper allows the easy manipulation of building masses and energy simulation parameters through semi-automated parametric",
"title": ""
},
{
"docid": "7dc5e63ddbb8ec509101299924093c8b",
"text": "The task of aspect and opinion terms co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. To achieve this task, one effective approach is to exploit relations between aspect terms and opinion terms by parsing syntactic structure for each sentence. However, this approach requires expensive effort for parsing and highly depends on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multilayer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves stateof-the-art performances compared with several baselines.",
"title": ""
},
{
"docid": "2c3ab7e0f49dc4575c77a712e8184ce0",
"text": "The cubature Kalman filter (CKF), which is based on the third degree spherical–radial cubature rule, is numericallymore stable than the unscented Kalman filter (UKF) but less accurate than theGauss–Hermite quadrature filter (GHQF). To improve the performance of the CKF, a new class of CKFs with arbitrary degrees of accuracy in computing the spherical and radial integrals is proposed. The third-degree CKF is a special case of the class. The high-degree CKFs of the class can achieve the accuracy and stability performances close to those of the GHQF but at lower computational cost. A numerical integration problem and a target tracking problem are utilized to demonstrate the necessity of using the high-degree cubature rules to improve the performance. The target tracking simulation shows that the fifth-degree CKF can achieve higher accuracy than the extended Kalman filter, the UKF, the third-degree CKF, and the particle filter, and is computationally much more efficient than the GHQF. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8b7caff264c4258f0ae91f5927fde978",
"text": "Table detection is an important task in the field of document analysis. It has been extensively studied since a couple of decades. Various kinds of document mediums are involved, from scanned images to web pages, from plain texts to PDF files. Numerous algorithms published bring up a challenging issue: how to evaluate algorithms in different context. Currently, most work on table detection conducts experiments on their in-house dataset. Even the few sources of online datasets are targeted at image documents only. Moreover, Precision and recall measurement are usual practice in order to account performance based on human evaluation. In this paper, we provide a dataset that is representative, large and most importantly, publicly available. The compatible format of the ground truth makes evaluation independent of document medium. We also propose a set of new measures, implement them, and open the source code. Finally, three existing table detection algorithms are evaluated to demonstrate the reliability of the dataset and metrics.",
"title": ""
},
{
"docid": "6210d2da6100adbd4db89a983d00419f",
"text": "Many binary code encoding schemes based on hashing have been actively studied recently, since they can provide efficient similarity search, especially nearest neighbor search, and compact data representations suitable for handling large scale image databases in many computer vision problems. Existing hashing techniques encode high-dimensional data points by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. Furthermore, we propose a new binary code distance function, spherical Hamming distance, that is tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve balanced partitioning of data points for each hash function and independence between hashing functions. Our extensive experiments show that our spherical hashing technique significantly outperforms six state-of-the-art hashing techniques based on hyperplanes across various image benchmarks of sizes ranging from one to 75 million of GIST descriptors. The performance gains are consistent and large, up to 100% improvements. The excellent results confirm the unique merits of the proposed idea in using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.",
"title": ""
},
{
"docid": "d81c866f09dfbead73c8d55986b231ef",
"text": "Phenazepam is a benzodiazepine derivative that has been in clinical use in Russia since 1978 and is not available by prescription in the United States; however, it is attainable through various internet websites, sold either as tablets or as a reference grade crystalline powder. Presented here is the case of a 42-year old Caucasian male who died as the result of combined phenazepam, morphine, codeine, and thebaine intoxication. A vial of white powder labeled \"Phenazepam, Purity 99%, CAS No. 51753-57-2, Research Sample\", a short straw, and several poppy seed pods were found on the scene. Investigation revealed that the decedent had a history of ordering medications over the internet and that he had consumed poppy seed tea prior to his death. Phenazepam, morphine, codeine, and thebaine were present in the blood at 386, 116, 85, and 72 ng/mL, respectively.",
"title": ""
},
{
"docid": "49575576bc5a0b949c81b0275cbc5f41",
"text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.",
"title": ""
},
{
"docid": "e577c2827822bfe2f1fc177efeeef732",
"text": "This paper presents a control problem involving an experimental propeller setup that is called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the beam of the TRMS move quickly and accurately to the desired attitudes, both the pitch angle and the azimuth angle in the condition of decoupling between two axes. It is difficult to design a suitable controller because of the influence between the two axes and nonlinear movement. For easy demonstration in the vertical and horizontal separately, the TRMS is decoupled by the main rotor and tail rotor. An intelligent control scheme which utilizes a hybrid PID controller is implemented to this problem. Simulation results show that the new approach to the TRMS control problem can improve the tracking performance and reduce control energy.",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "57a333a88a5c1f076fd096ec4cde4cba",
"text": "2.1 HISTORY OF BIOTECHNOLOGY....................................................................................................6 2.2 MODERN BIOTECHNOLOGY ........................................................................................................6 2.3 THE GM DEBATE........................................................................................................................7 2.4 APPLYING THE PRECAUTIONARY APPROACH TO GMOS .............................................................8 2.5 RISK ASSESSMENT ISSUES ..........................................................................................................9 2.6 LEGAL CONTEXT ......................................................................................................................10 T",
"title": ""
},
{
"docid": "4519e039416fe4548e08a15b30b8a14f",
"text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.",
"title": ""
},
{
"docid": "1bc91b4547481a81c2963dd117a96370",
"text": "Breast cancer is one of the main causes of women mortality worldwide. Ultrasonography (USG) is other modalities than mammography that capable to support radiologists in diagnosing breast cancer. However, the diagnosis may come with different interpretation depending on the radiologists experience. Therefore, Computer-Aided Diagnosis (CAD) is developed as a tool for radiologist's second opinion. CAD is built based on digital image processing of ultrasound (US) images which consists of several stages. Lesion segmentation is an important step in CAD system because it contains many important features for classification process related to lesion characteristics. This study provides a performance analysis and comparison of image segmentation for breast USG images. In this paper, several methods are presented such as a comprehensive comparison of adaptive thresholding, fuzzy C-Means (FCM), Fast Global Minimization for Active Contour (FGMAC) and Active Contours Without Edges (ACWE). The performance of these methods are evaluated with evaluation metrics Dice coefficient, Jaccard coefficient, FPR, FNR, Hausdorff distance, PSNR and MSSD parameters. Morphological operation is able to increase the performance of each segmentation methods. Overall, ACWE with morphological operation gives the best performance compare to the other methods with the similarity level of more than 90%.",
"title": ""
},
{
"docid": "77f5c568ed065e4f23165575c0a05da6",
"text": "Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.",
"title": ""
},
{
"docid": "02d11f4663277bb55a289d03403b5eb2",
"text": "Financial markets play an important role on the economical and social organization of modern society. In these kinds of markets, information is an invaluable asset. However, with the modernization of the financial transactions and the information systems, the large amount of information available for a trader can make prohibitive the analysis of a financial asset. In the last decades, many researchers have attempted to develop computational intelligent methods and algorithms to support the decision-making in different financial market segments. In the literature, there is a huge number of scientific papers that investigate the use of computational intelligence techniques to solve financial market problems. However, only few studies have focused on review the literature of this topic. Most of the existing review articles have a limited scope, either by focusing on a specific financial market application or by focusing on a family of machine learning algorithms. This paper presents a review of the application of several computational intelligent methods in several financial applications. This paper gives an overview of the most important primary studies published from 2009 to 2015, which cover techniques for preprocessing and clustering of financial data, for forecasting future market movements, for mining financial text information, among others. The main contributions of this paper are: (i) a comprehensive review of the literature of this field, (ii) the definition of a systematic procedure for guiding the task of building an intelligent trading system and (iii) a discussion about the main challenges and open problems in this scientific field. © 2016 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "dc1360563cb509c4213a68d2c9be56f1",
"text": "We present a novel efficient algorithm for portfolio selection which theoretically attains two desirable properties: 1. Worst-case guarantee: the algorithm is universal in the sense that it asymptotically performs almost as well as the best constant rebalanced portfolio determined in hindsight from the realized market prices. Furthermore, it attains the tightest known bounds on the regret, or the log-wealth difference relative to the best constant rebalanced portfolio. We prove that the regret of algorithm is bounded by O(logQ), where Q is the quadratic variation of the stock prices. This is the first improvement upon Cover’s [Cov91] seminal work that attains a regret bound of O(log T ), where T is the number of trading iterations. 2. Average-case guarantee: in the Geometric Brownian Motion (GBM) model of stock prices, our algorithm attains tighter regret bounds, which are provably impossible in the worst-case. Hence, when the GBM model is a good approximation of the behavior of market, the new algorithm has an advantage over previous ones, albeit retaining worst-case guarantees. We derive this algorithm as a special case of a novel and more general method for online convex optimization with exp-concave loss functions.",
"title": ""
},
{
"docid": "6cb2004d77c5a0ccb4f0cbab3058b2bc",
"text": "the field of optical character recognition.",
"title": ""
},
{
"docid": "4cfeef6e449e37219c75f8063220c1f8",
"text": "The 20 century was based on local linear engineering of complicated systems. We made cars, airplanes and chemical plants for example. The 21ot century has opened a new basis for holistic non-linear design of complex systems, such as the Internet, air traffic management and nanotechnologies. Complexity, interconnectivity, interaction and communication are major attributes of our evolving society. But, more interestingly, we have started to understand that chaos theories may be more important than reductionism, to better understand and thrive on our planet. Systems need to be investigated and tested as wholes, which requires a cross-disciplinary approach and new conceptual principles and tools. Consequently, schools cannot continue to teach isolated disciplines based on simple reductionism. Science; Technology, Engineering, and Mathematics (STEM) should be integrated together with the Arts to promote creativity together with rationalization, and move to STEAM (with an \"A\" for Arts). This new concept emphasizes the possibility of longer-term socio-technical futures instead of short-term financial predictions that currently lead to uncontrolled economies. Human-centered design (HCD) can contribute to improving STEAM education technologies, systems and practices. HCD not only provides tools and techniques to build useful and usable things, but also an integrated approach to learning by doing, expressing and critiquing, exploring possible futures, and understanding complex systems.",
"title": ""
},
{
"docid": "f779bf251b3d066e594867680e080ef4",
"text": "Machine Translation is area of research since six decades. It is gaining popularity since last decade due to better computational facilities available at personal computer systems. This paper presents different Machine Translation system where Sanskrit is involved as source, target or key support language. Researchers employ various techniques like Rule based, Corpus based, Direct for machine translation. The main aim to focus on Sanskrit in Machine Translation in this paper is to uncover the language suitability, its morphology and employ appropriate MT techniques.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
}
] |
scidocsrr
|
b081c8cfc5365aa466c2db4580b4a515
|
Image Processing and Image Mining using Decision Trees
|
[
{
"docid": "4aac8bed4ddd3707c5b391d2025425c9",
"text": "Grouping images into (semantically) meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. Using binary Bayesian classifiers, we attempt to capture high-level concepts from low-level image features under the constraint that the test image does belong to one of the classes. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified as indoor or outdoor; outdoor images are further classified as city or landscape; finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small vector quantizer (whose optimal size is selected using a modified MDL criterion) can be used to model the class-conditional densities of the features, required by the Bayesian methodology. The classifiers have been designed and evaluated on a database of 6931 vacation photographs. Our system achieved a classification accuracy of 90.5% for indoor/outdoor, 95.3% for city/landscape, 96.6% for sunset/forest and mountain, and 96% for forest/mountain classification problems. We further develop a learning method to incrementally train the classifiers as additional data become available. We also show preliminary results for feature reduction using clustering techniques. Our goal is to combine multiple two-class classifiers into a single hierarchical classifier.",
"title": ""
}
] |
[
{
"docid": "97107561103eec062d9a2d4ae28ffb9e",
"text": "Development of loyalty in customers is a strategic goal of many firms and organizations and today, the main effort of many firms is allocated to retain customers and obtaining even more ones. Characteristics of loyal customers and method for formation of loyalty in customers in internet space are different to those in traditional one in some respects and study of them may be beneficial in improving performance of firms, organizations and shops involving in this field of business. Also it may help managers of these types of businesses to make efficient and effective decisions towards success of their organizations. Thus, present study aims to investigate the effects of e-service quality in three aspects of information, system and web-service on e-trust and e-satisfaction as key factors influencing creation of e-loyalty of Iranian customers in e-business context; Also it was tried to demonstrate moderating effect of situational factors e.g. time poverty, geographic distance, physical immobility and lack of transportation on e-loyalty level. Totally, 400 questionnaires were distributed to university students, that 382 questionnaires were used for the final analysis, which the results from analysis of them based on simple linear regression and multiple hierarchical regression show that customer loyalty to e-shops is directly influenced by e-trust in and e-satisfaction with e-shops which in turn are determined by e-service quality; also the obtained results shows that situational variables can moderate relationship between e-trust and/or e-satisfaction and e-loyalty. Therefore situational variables studied in present research can influence initiation of transaction of customer with online retailer and customer attitude importance and this in turn makes it necessary for managers to pay special attention to situational effects in examination of current attitude and behavior of customers.",
"title": ""
},
{
"docid": "24266c007082921474ce9ebb2575e5c3",
"text": "This review addresses the long-term gender outcome of gender assignment of persons with intersexuality and related conditions. The gender assignment to female of 46,XY newborns with severe genital abnormalities despite a presumably normal-male prenatal sex-hormone milieu is highly controversial because of variations in assumptions about the role of biological factors in gender identity formation. This article presents a literature review of gender outcome in three pertinent conditions (penile agenesis, cloacal exstrophy of the bladder, and penile ablation) in infancy or early childhood. The findings clearly indicate an increased risk of later patient-initiated gender re-assignment to male after female assignment in infancy or early childhood, but are nevertheless incompatible with the notion of a full determination of core gender identity by prenatal androgens.",
"title": ""
},
{
"docid": "bf6434b4498aa3cdaaf482cb15ca7e12",
"text": "Multicore processors represent the latest significant development in microprocessor technology. Computer System Performance and Evaluation deal with the investigation of computer components (both hardware and software) with a view to establish the level of their performances. This research work carried out performance evaluation studies on AMD dual-core and Intel dual-core processor to know which of the processor has better execution time and throughput. The architecture of AMD and Intel duo-core processor were studied. SPEC CPU2006 benchmarks suite was used to measure the performance of AMD and Intel duo core processors. The overall execution and throughput time measurement of AMD and Intel duo core processors were reported and compared to each other. Results showed that the execution time of CQ56 Intel Pentium Dual-Core Processor is about 6.62% faster than AMD Turion II P520 Dual-Core Processor while the throughput of Intel Pentium Dual-Core Processor was found to be 1.06 times higher than AMD Turion (tm) II P520 Dual Core Processor. Therefore, Intel Pentium Dual-Core Processors exhibit better performance probably due to the following architectural features: faster core-to-core communication, dynamic cache sharing between cores and smaller size of level 2 cache.",
"title": ""
},
{
"docid": "7c2ac62211ee7070298796241751f027",
"text": "Recently, “platform ecosystem” has received attention as a key business concept. Sustainable growth of platform ecosystems is enabled by platform users supplying and/or demanding content from each other: e.g. Facebook, YouTube or Twitter. The importance and value of user data in platform ecosystems is accentuated since platform owners use and sell the data for their business. Serious concern is increasing about data misuse or abuse, privacy issues and revenue sharing between the different stakeholders. Traditional data governance focuses on generic goals and a universal approach to manage the data of an enterprise. It entails limited support for the complicated situation and relationship of a platform ecosystem where multiple participating parties contribute, use data and share profits. This article identifies data governance factors for platform ecosystems through literature review. The study then surveys the data governance state of practice of four platform ecosystems: Facebook, YouTube, EBay and Uber. Finally, 19 governance models in industry and academia are compared against our identified data governance factors for platform ecosystems to reveal the gaps and limitations.",
"title": ""
},
{
"docid": "1c20908b24c78b43a858ba154165b544",
"text": "The implementation of concentrated windings in interior permanent magnet (IPM) machines has numerous advantages over distributed windings, with the disadvantage being mainly the decrease in saliency ratio. This paper presents a proposed finite element (FE) method in which the d- and q-axis inductances (Ld and Lq) of the IPM machine with fractional-slot concentrated windings can be accurately determined. This method is used to determine Ld and Lq of various winding configurations and to determine the optimum saliency ratio for a 12-slot 14-pole model with fractional-slot concentrated windings. FE testing were carried out by the use of Flux2D.",
"title": ""
},
{
"docid": "375d5fcb41b7fb3a2f60822720608396",
"text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the the hardware layer, we present the Versatile Tensor Accelerator (VTA) which presents a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compiles it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.",
"title": ""
},
{
"docid": "cea92cadacce42ed8db1d3d14370f838",
"text": "Domestic dogs are unusually skilled at reading human social and communicative behavior--even more so than our nearest primate relatives. For example, they use human social and communicative behavior (e.g. a pointing gesture) to find hidden food, and they know what the human can and cannot see in various situations. Recent comparisons between canid species suggest that these unusual social skills have a heritable component and initially evolved during domestication as a result of selection on systems mediating fear and aggression towards humans. Differences in chimpanzee and human temperament suggest that a similar process may have been an important catalyst leading to the evolution of unusual social skills in our own species. The study of convergent evolution provides an exciting opportunity to gain further insights into the evolutionary processes leading to human-like forms of cooperation and communication.",
"title": ""
},
{
"docid": "cca61271fe31513cb90c2ac7ecb0b708",
"text": "This paper deals with the synthesis of fuzzy state feedback controller of induction motor with optimal performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate a non linear system in the synchronous d-q frame rotating with electromagnetic field-oriented. Next, a fuzzy controller is designed to stabilise the induction motor and guaranteed a minimum disturbance attenuation level for the closed-loop system. The gains of fuzzy control are obtained by solving a set of Linear Matrix Inequality (LMI). Finally, simulation results are given to demonstrate the controller’s effectiveness. Keywords—Rejection disturbance, fuzzy modelling, open-loop control, Fuzzy feedback controller, fuzzy observer, Linear Matrix Inequality (LMI)",
"title": ""
},
{
"docid": "96590c575412d33e09fee7ea52ae9a60",
"text": "Performance of microphone arrays at the high-frequency range is typically limited by aliasing, which is a result of the spatial sampling process. This paper presents analysis of aliasing for spherical microphone arrays, which have been recently studied for a range of applications. The paper presents theoretical analysis of spatial aliasing for various sphere sampling configurations, showing how high-order spherical harmonic coefficients are aliased into the lower orders. Spatial antialiasing filters on the sphere are then introduced, and the performance of spatially constrained filters is compared to that of the ideal antialiasing filter. A simulation example shows how the effect of aliasing on the beam pattern can be reduced by the use of the antialiasing filters",
"title": ""
},
{
"docid": "974f5d138d2a85d81b5dd64f13311721",
"text": "We present a new constraint solver over Boolean variables, available as library(clpb) in SWI-Prolog. Our solver distinguishes itself from other available CLP(B) solvers by several unique features: First, it is written entirely in Prolog and is hence portable to different Prolog implementations. Second, it is the first freely available BDDbased CLP(B) solver. Third, we show that new interface predicates allow us to solve new types of problems with CLP(B) constraints. We also use our implementation experience to contrast features and state necessary requirements of attributed variable interfaces to optimally support CLP(B) constraints in different Prolog systems. Finally, we also present some performance results and comparisons with SICStus Prolog.",
"title": ""
},
{
"docid": "3e88cbd8f22df74d233da045e86e4546",
"text": "The generation and propagation of single event transients (SET) is measured and modeled in SOI inverter chains with different designs. SET propagation in inverter chains induces significant modifications of the transient width. In some cases, a \"propagation-induced pulse broadening\" (PIPB) effect is observed. Initially narrow transients, less than 200 ps at the struck node, are progressively broadened up to the nanosecond range, with the degree of broadening dependent on the transistor design and the length of propagation. The chain design (transistor size and load) is shown to have a major impact on the transient width modification.",
"title": ""
},
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
{
"docid": "53acdb714d51d9eca25f1e635f781afa",
"text": "Research in several areas provides scientific guidance for use of graphical encoding to convey information in an information visualization display. By graphical encoding we mean the use of visual display elements such as icon color, shape, size, or position to convey information about objects represented by the icons. Literature offers inconclusive and often conflicting viewpoints, including the suggestion that the effectiveness of a graphical encoding depends on the type of data represented. Our empirical study suggests that the nature of the users’ perceptual task is more indicative of the effectiveness of a graphical encoding than the type of data represented. 1. Overview of Perceptual Issues In producing a design to visualize search results for a digital library called Envision [12, 13, 19], we found that choosing graphical devices and document attributes to be encoded with each graphical device is a surprisingly difficult task. By graphical devices we mean those visual display elements (e.g., icon color hue, color saturation, flash rate, shape, size, alphanumeric identifiers, position, etc.) used to convey encoded information. Providing access to graphically encoded information requires attention to a range of human cognitive activities, explored by researchers under at least three rubrics: psychophysics of visual search and identification tasks, graphical perception, and graphical language development. Research in these areas provides scientific guidance for design and evaluation of graphical encoding that might otherwise be reduced to opinion and personal taste. Because of space limits, we discuss here only a small portion of the research on graphical encoding that has been conducted. Additional information is in [20]. Ware [29] provides a broader review of perceptual issues pertaining to information visualization. Especially useful for designers are rankings by effectiveness of various graphical devices in communicating different types of data (e.g., nominal, ordinal, or quantitative). Christ [6] provides such rankings in the context of visual search and identification tasks and provides some empirical evidence to support his findings. Mackinlay [17] suggests rankings of graphical devices for conveying nominal, ordinal, and quantitative data in the context of graphical language design, but these rankings have not been empirically validated [personal communication]. Cleveland and McGill [8, 9] have empirically validated their ranking of graphical devices for quantitative data. The rankings suggested by Christ, Mackinlay, and Cleveland and McGill are not the same, while other literature offers more conflicting viewpoints, suggesting the need for further research. 1.1 Visual Search and Identification Tasks Psychophysics is a branch of psychology concerned with the \"relationship between characteristics of physical stimuli and the psychological experience they produce\" [28]. Studies in the psychophysics of visual search and identification tasks have roots in signal detection theory pertaining to air traffic control, process control, and cockpit displays. These studies suggest rankings of graphical devices [6, 7] described later in this paper and point out significant perceptual interactions among graphical devices used in multidimensional displays. Visual search tasks require visual scanning to locate one or more targets [6, 7, 31]. 
With a scatterplotlike display (sometimes known as a starfield display [1]), users perform a visual search task when they scan the display to determine the presence of one or more symbols meeting some specific criterion and to locate those symbols if present. For identification tasks, users go beyond visual search to report semantic data about symbols of interest, typically by answering true/false questions or by noting facts about encoded data [6, 7]. Measures of display effectiveness for visual search and identification tasks include time, accuracy, and cognitive workload. A more thorough introduction to signal detection theory may be found in Wickens’ book [31]. Issues involved in studies that influenced the Envision design are complex and findings are sometimes contradictory. Following is a representative overview, but many imProceedings of the IEEE Symposium on Information Visualization 2002 (InfoVis’02) 1522-404X/02 $17.00 © 2002 IEEE portant details are necessarily omitted due to space limitations. 1.1.1 Unidimensional Displays. For unidimensional displays — those involving a single graphical code — Christ’s [6, 7] meta-analysis of 42 prior studies suggests the following ranking of graphical devices by effectiveness: color, size, brightness or alphanumeric, and shape. Other studies confirm that color is the most effective graphical device for reducing display search time [7, 14, 25] but find it followed by shape and then letters or digits [7]. Benefits of color-coding increase for high-density displays [15, 16], but using shapes too similar to one another actually increases search time [22]. For identification tasks measuring accuracy with unidimensional displays, Christ’s work [6, 7] suggests the following ranking of graphical devices by effectiveness: alphanumeric, color, brightness, size, and shape. In a later study, Christ found that digits gave the most accurate results but that color, letters, and familiar geometric shapes all produced equal results with experienced subjects [7]. However, Jubis [14] found that shape codes yielded faster mean reaction times than color codes, while Kopala [15] found no significant difference among codes for identification tasks. 1.1.2 Multidimensional Displays. For multidimensional displays — those using multiple graphical devices combined in one visual object to encode several pieces of information — codes may be either redundant or non-redundant. A redundant code using color and shape to encode the same information yields average search speeds even faster than non-redundant color or shape encoding [7]. Used redundantly with other codes, color yields faster results than shape, and either color or shape is superior as a redundant code to both letters and digits [7]. Jubis [14] confirms that a redundant code involving both color and shape is superior to shape coding but is approximately equal to non-redundant color-coding. For difficult tasks, using redundant color-coding may significantly reduce reaction time and increase accuracy [15]. Benefits of redundant color-coding increase as displays become more cluttered or complex [15]. 1.1.3 Interactions Among Graphical Devices . Significant interactions among graphical devices complicate design for multidimensional displays. Color-coding interferes with all achromatic codes, reducing accuracy by as much as 43% [6]. 
Indeed, Luder [16] suggests that color has such cognitive dominance that it should only be used to encode the most important data and in situations where dependence on color-coding does not increase risk. While we found no supporting empirical evidence, we believe size and shape interact, causing the shape of very small objects to be perceived less accurately. 1.1.4 Ranges of Graphical Devices. The number of instances of each graphical device (e.g., how many colors or shapes are used in the code) is significant because it limits the range or number of values encoded using that device [3]. The conservative recommendation is to use only five or six distinct colors or shapes [3, 7, 27, 31]. However, some research suggests that 10 [3] to 18 [24] colors may be used for search tasks. 1.1.5 Integration vs. Non-integration Tasks. Later research has focused on how humans extract information from a multidimensional display to perform both integration and non-integration tasks [4, 26, 27]. An integration task uses information encoded non-redundantly with two or more graphical devices to reach a single decision or action, while a non-integration task bases decisions or actions on information encoded in only one graphical device. Studies [4, 30] provide evidence that object displays, in which multiple visual attributes of a single object present information about multiple characteristics, facilitate integration tasks, especially where multiple graphical encodings all convey information relevant to the task at hand. However, object displays hinder non-integration tasks, as additional effort is required to filter out unwanted information communicated by the objects. 1.2 Graphical Perception Graphical perception is “the visual decoding of the quantitative and qualitative information encoded on graphs,” where visual decoding means “instantaneous perception of the visual field that comes without apparent mental effort” [9, p. 828]. Cleveland and McGill studied the perception of quantitative data such as “numerical values of a variable...that are not highly discrete...” [9, p. 828]. They have identified and empirically validated a ranking of graphical devices for displaying quantitative data, ordered as follows from most to least accurately perceived [9, p. 830]: Position along a common scale; Position on identical but non-aligned scales; Length; Angle or Slope; Area; Volume, Density, and/or Color saturation; Color hue. 1.3 Graphical Language Development Graphical language development is based on the assertion that graphical devices communicate information equivalent to sentences [17] and thus call for attention to appropriate use of each graphical device. In his discussion of graphical languages, Mackinlay [17] suggests three different rankings of the effectiveness of various graphical devices in communicating quantitative (numerical), ordinal (ranked), and nominal (non-ordinal textual) data about objects. Although based on psychophysical and graphical perception research, Mackinlay's rankings have not been experimentally validated [personal communication]. 1.4 Observations on Prior Research These studies make it clear that no single graphical device works equally well for all users, nor does an",
"title": ""
},
{
"docid": "f077dc076131748d97ec36b44c3feb6e",
"text": "The inspection, assessment, maintenance and safe operation of the existing civil infrastructure consists one of the major challenges facing engineers today. Such work requires either manual approaches, which are slow and yield subjective results, or automated approaches, which depend upon complex handcrafted features. Yet, for the latter case, it is rarely known in advance which features are important for the problem at hand. In this paper, we propose a fully automated tunnel assessment approach; using the raw input from a single monocular camera we hierarchically construct complex features, exploiting the advantages of deep learning architectures. Obtained features are used to train an appropriate defect detector. In particular, we exploit a Convolutional Neural Network to construct high-level features and as a detector we choose to use a Multi-Layer Perceptron due to its global function approximation properties. Such an approach achieves very fast predictions due to the feedforward nature of Convolutional Neural Networks and Multi-Layer Perceptrons.",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "192e124432b9ba8dfbb9b8e8b1c42a76",
"text": "Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network recognize a formal language or predict the next symbol of a sequence, the next logical step is to understand the information processing carried out by the network. Some researchers have begun to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes how sensitivity to initial conditions and discrete measurements can trick these extraction methods to return illusory finite state descriptions. INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern for cognitive science and artificial intelligence. It is the task of inducing a computational description of a formal language from a sequence of positive and negative examples of strings in the target language. Neural information processing approaches to this problem involve the use of recurrent networks that embody the internal state mechanisms underlying automata models (Cleeremans et al., 1989; Elman, 1990; Pollack, 1991; Giles et al, 1992; Watrous & Kuhn, 1992). Unlike traditional automata-based approaches, learning systems relying on recurrent networks have an additional burden: we are still unsure as to what these networks are doing.Some researchers have assumed that the networks are learning to simulate finite state Fool’s Gold: Extracting Finite State Machines From Recurrent Network Dynamics machines (FSMs) in their state dynamics and have begun to extract FSMs from the networks' state transition dynamics (Cleeremans et al., 1989; Giles et al., 1992; Watrous & Kuhn, 1992). These extraction methods employ various clustering techniques to partition the internal state space of the recurrent network into a finite number of regions corresponding to the states of a finite state automaton. This assumption of finite state behavior is dangerous on two accounts. First, these extraction techniques are based on a discretization of the state space which ignores the basic definition of information processing state. Second, discretization can give rise to incomplete computational explanations of systems operating over a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I will demonstrate how sensitivity to initial conditions can confuse an FSM extraction system. The basis of this claim rests upon the definition of information processing state. Information processing (IP) state is the foundation underlying automata theory. Two IP states are the same if and only if they generate the same output responses for all possible future inputs (Hopcroft & Ullman, 1979). This definition is the fulcrum for many proofs and techniques, including finite state machine minimization. Any FSM extraction technique should embrace this definition, in fact it grounds the standard FSM minimization methods and the physical system modelling of Crutchfield and Young (Crutchfield & Young, 1989). Some dynamical systems exhibit exponential divergence for nearby state vectors, yet remain confined within an attractor. This is known as sensitivity to initial conditions. If this divergent behavior is quantized, it appears as nondeterministic symbol sequences (Crutchfield & Young, 1989) even though the underlying dynamical system is completely deterministic (Figure 1). Consider a recurrent network with one output and three recurrent state units. The output unit performs a threshold at zero activation for state unit one. 
That is, when the activation of the first state unit of the current state is less than zero, then the output is A. Otherwise, the output is B. Equation 1 presents a mathematical description in terms of the current state of the system and the current output.",
"title": ""
},
{
"docid": "1d12470ab31735721a1f50ac48ac65bd",
"text": "In this work, we investigate the role of relational bonds in keeping students engaged in online courses. Specifically, we quantify the manner in which students who demonstrate similar behavior patterns influence each other’s commitment to the course through their interaction with them either explicitly or implicitly. To this end, we design five alternative operationalizations of relationship bonds, which together allow us to infer a scaled measure of relationship between pairs of students. Using this, we construct three variables, namely number of significant bonds, number of significant bonds with people who have dropped out in the previous week, and number of such bonds with people who have dropped in the current week. Using a survival analysis, we are able to measure the prediction strength of these variables with respect to dropout at each time point. Results indicate that higher numbers of significant bonds predicts lower rates of dropout; while loss of significant bonds is associated with higher rates of dropout.",
"title": ""
},
{
"docid": "7edfde7d7875d88702db2aabc4ac2883",
"text": "This paper proposes a novel approach to build integer multiplication circuits based on speculation, a technique which performs a faster-but occasionally wrong-operation resorting to a multi-cycle error correction circuit only in the rare case of error. The proposed speculative multiplier uses a novel speculative carry-save reduction tree using three steps: partial products recoding, partial products partitioning, speculative compression. The speculative tree uses speculative (m:2) counters, with m > 3, that are faster than a conventional tree using full-adders and half-adders. A technique to automatically choose the suitable speculative counters, taking into accounts both error probability and delay, is also presented in the paper. The speculative tree is completed with a fast speculative carry-propagate adder and an error correction circuit. We have synthesized speculative multipliers for several operand lengths using the UMC 65 nm library. Comparisons with conventional multipliers show that speculation is effective when high speed is required. Speculative multipliers allow reaching a higher speed compared with conventional counterparts and are also quite effective in terms of power dissipation, when a high speed operation is required.",
"title": ""
},
{
"docid": "1e852e116c11a6c7fb1067313b1ffaa3",
"text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013",
"title": ""
},
{
"docid": "95a3c05afbca1f8ff77a5320a63e2617",
"text": "The idea of bike sharing is to provide bikes to the citizens via stations that are located all around the city. At each station, bikes are stored in special racks, such that users can easily pick up or return a bike. However, popular stations are often emptied or filled very quickly, resulting in annoyed users who cannot return or retrieve bikes. To avoid this, the stations must be balanced. Bike sharing systems are balanced by distributing bikes from one station to another by using specific vehicles. Therefore, balancing the system corresponds to finding a tour for each vehicle, including loading and unloading instructions per station such that the resulting system is balanced. Clearly, balancing bike sharing systems is a difficult task, since it requires solving a vehicle routing problem combined with distributing single commodities (bikes) according to the target values at the stations. In the following, we are consistent with the notation introduced in [5]. We consider balancing a bike sharing system with S stations S = {1, . . . , S} and a set of depots D = {S + 1, . . . , S +D}, where each station s ∈ S has a capacity Cs > 0, a number of available bikes bs and the number of target bikes ts that denotes the number of bikes that should be at station s after balancing the system. We use V vehicles V = {1, . . . , V } with capacity cv > 0 and initial load b̂v ≥ 0 that distribute the bikes within maximal t̂v > 0 time units. The travel times between stations (and the depots) is given by a travel time matrix ttu,v ∗Scheduling and Timetabling Group, Department of Electrical, Management and Mechanical Engineering, University of Udine, Via Delle Scienze, 206 33100 Udine, Italy – {luca.digaspero,tommaso.urli}@uniud.it †DTS Mobility Department, Austrian Institute of Technology. Giefinggasse 2, 1210 Vienna, Austria – andrea.rendl@ait.ac.at",
"title": ""
}
] |
scidocsrr
|
e232f96a7cfa31afd68bebd82ba1bacc
|
Hybrid prediction model for Type-2 diabetic patients
|
[
{
"docid": "83688690678b474cd9efe0accfdb93f9",
"text": "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality.",
"title": ""
}
] |
[
{
"docid": "e0af75450667733cde8745d09ae20c22",
"text": "Although keeping patients informed is a part of quality hospital care, inpatients often report they are not well informed. The authors placed whiteboards in each patient room on medicine wards in their hospital and asked nurses and physicians to use them to improve communication with inpatients. The authors then examined the effect of these whiteboards by comparing satisfaction with communication of patients discharged from medical wards before and after whiteboards were placed to satisfaction with communication of patients from surgical wards that did not have whiteboards. Patient satisfaction scores (0-100 scale) with communication improved significantly on medicine wards: nurse communication (+6.4, P < .001), physician communication (+4.0, P = .04), and involvement in decision making (+6.3, P = .002). Patient satisfaction scores did not change significantly on surgical wards. There was no secular trend, and the authors excluded a trend in overall patient satisfaction. Whiteboards could be a simple and effective tool to increase inpatient satisfaction with communication.",
"title": ""
},
{
"docid": "fc9fe094b3e46a85b7564a89730347fd",
"text": "We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.",
"title": ""
},
{
"docid": "db01e0c7c959e2f279afc5d78240ffca",
"text": "The implementation of an enterprise-wide Service Oriented Architecture (SOA) is a complex task. In most cases, evolutional approaches are used to handle this complexity. Maturity models are a possibility to plan and control such an evolution as they allow evaluating the current maturity and identifying current shortcomings. In order to support an SOA implementation, maturity models should also support in the selection of the most adequate maturity level and the deduction of a roadmap to this level. Existing SOA maturity models provide only weak assistance with the selection of an adequate maturity level. Most of them are developed by vendors of SOA products and often used to promote their products. In this paper, we introduce our independent SOA Maturity Model (iSOAMM), which is independent of the used technologies and products. In addition to the impacts on IT systems, it reflects the implications on organizational structures and governance. Furthermore, the iSOAMM lists the challenges, benefits and risks associated with each maturity level. This enables enterprises to select the most adequate maturity level for them, which is not necessarily the highest one.",
"title": ""
},
{
"docid": "7e40c7145f4613f12e7fc13646f3927c",
"text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.",
"title": ""
},
{
"docid": "deccc92276cca4d064b0161fd8ee7dd9",
"text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. html structure or DOM tree structure to scrap data from web page. In this paper we are presenting survey of HTML aware web scrapping techniques. Keywords— DOM Tree, HTML structure, semi structured web pages, web scrapping and Web data extraction.",
"title": ""
},
{
"docid": "c7c1bafc295af6ebc899e391daae04c1",
"text": "Non-orthogonal multiple access (NOMA) is expected to be a promising multiple access technique for 5G networks due to its superior spectral efficiency. In this letter, the ergodic capacity maximization problem is first studied for the Rayleigh fading multiple-input multiple-output (MIMO) NOMA systems with statistical channel state information at the transmitter (CSIT). We propose both optimal and low complexity suboptimal power allocation schemes to maximize the ergodic capacity of MIMO NOMA system with total transmit power constraint and minimum rate constraint of the weak user. Numerical results show that the proposed NOMA schemes significantly outperform the traditional orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "580b5dfe7d17db560d5efd2fd975a284",
"text": "Structured knowledge about concepts plays an increasingly important role in areas such as information retrieval. The available ontologies and knowledge graphs that encode such conceptual knowledge, however, are inevitably incomplete. This observation has led to a number of methods that aim to automatically complete existing knowledge bases. Unfortunately, most existing approaches rely on black box models, e.g. formulated as global optimization problems, which makes it difficult to support the underlying reasoning process with intuitive explanations. In this paper, we propose a new method for knowledge base completion, which uses interpretable conceptual space representations and an explicit model for inductive inference that is closer to human forms of commonsense reasoning. Moreover, by separating the task of representation learning from inductive reasoning, our method is easier to apply in a wider variety of contexts. Finally, unlike optimization based approaches, our method can naturally be applied in settings where various logical constraints between the extensions of concepts need to be taken into account.",
"title": ""
},
{
"docid": "048646919aaf49a43f7eb32f47ba3041",
"text": "The authors developed and meta-analytically examined hypotheses designed to test and extend work design theory by integrating motivational, social, and work context characteristics. Results from a summary of 259 studies and 219,625 participants showed that 14 work characteristics explained, on average, 43% of the variance in the 19 worker attitudes and behaviors examined. For example, motivational characteristics explained 25% of the variance in subjective performance, 2% in turnover perceptions, 34% in job satisfaction, 24% in organizational commitment, and 26% in role perception outcomes. Beyond motivational characteristics, social characteristics explained incremental variances of 9% of the variance in subjective performance, 24% in turnover intentions, 17% in job satisfaction, 40% in organizational commitment, and 18% in role perception outcomes. Finally, beyond both motivational and social characteristics, work context characteristics explained incremental variances of 4% in job satisfaction and 16% in stress. The results of this study suggest numerous opportunities for the continued development of work design theory and practice.",
"title": ""
},
{
"docid": "999a1fbc3830ca0453760595046edb6f",
"text": "This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, ID embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures: In both systems, in quantitative experiments, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.",
"title": ""
},
{
"docid": "3e42bdbf2888562dfd6031f2bf95eb96",
"text": "I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems. 1 Store and Compress and Reward Compression Progress If the history of the entire universe were computable [123, 124], and there is no evidence against this possibility [84], then its simplest explanation would be the shortest program that computes it [65, 70]. Unfortunately there is no general way of finding the shortest program computing any given data [34,37,106,107]. Therefore physicists have traditionally proceeded incrementally, analyzing just a small aspect of the world at any given time, trying to find simple laws that allow for describing their limited observations better than the best previously known law, essentially trying to find a program that compresses the observed data better than the best previously known program. For example, Newton’s law of gravity can be formulated as a short piece of code which allows for substantially compressing many observation sequences involving falling apples and other objects. Although its predictive power is limited—for example, it does First version of this preprint published 23 Dec 2008; revised April 2009. Variants are scheduled to appear as references [90] and [91] (short version), distilling some of the essential ideas in earlier work (1990-2008) on this subject: [57,58,59,60,61,68,72,76,108] and especially recent papers [81, 87, 88, 89]. G. Pezzulo et al. (Eds.): ABiALS 2008, LNAI 5499, pp. 48–76, 2009. c © Springer-Verlag Berlin Heidelberg 2009 Driven by Compression Progress 49 not explain quantum fluctuations of apple atoms—it still allows for greatly reducing the number of bits required to encode the data stream, by assigning short codes to events that are predictable with high probability [28] under the assumption that the law holds. Einstein’s general relativity theory yields additional compression progress as it compactly explains many previously unexplained deviations from Newton’s predictions. Most physicists believe there is still room for further advances. Physicists, however, are not the only ones with a desire to improve the subjective compressibility of their observations. Since short and simple explanations of the past usually reflect some repetitive regularity that helps to predict the future as well, every intelligent system interested in achieving future goals should be motivated to compress the history of raw sensory inputs in response to its actions, simply to improve its ability to plan ahead. 
A long time ago, Piaget [49] already explained the explorative learning behavior of children through his concepts of assimilation (new inputs are embedded in old schemas—this may be viewed as a type of compression) and accommodation (adapting an old schema to a new input—this may be viewed as a type of compression improvement), but his informal ideas did not provide enough formal details to permit computer implementations of his concepts. How to model a compression progress drive in artificial systems? Consider an active agent interacting with an initially unknown world. We may use our general Reinforcement Learning (RL) framework of artificial curiosity (1990-2008) [57,58,59,60,61,68,72,76,81,87,88,89,108] to make the agent discover data that allows for additional compression progress and improved predictability. The framework directs the agent towards a better understanding of the world through active exploration, even when external reward is rare or absent, through intrinsic reward or curiosity reward for actions leading to discoveries of previously unknown regularities in the action-dependent incoming data stream.",
"title": ""
},
{
"docid": "1f4a31c3d031dfc1b53e0fd817c32f00",
"text": "Credit and debit card data theft is one of the earliest forms of cybercrime. Still, it is one of the most common nowadays. Attackers often aim at stealing such customer data by targeting the Point of Sale (for short, PoS) system, i.e. the point at which a retailer first acquires customer data. Modern PoS systems are powerful computers equipped with a card reader and running specialized software. Increasingly often, user devices are leveraged as input to the PoS. In these scenarios, malware that can steal card data as soon as they are read by the device has flourished. As such, in cases where customer and vendor are persistently or intermittently disconnected from the network, no secure on-line payment is possible. This paper describes FRoDO, a secure off-line micro-payment solution that is resilient to PoS data breaches. Our solution improves over up to date approaches in terms of flexibility and security. To the best of our knowledge, FRoDO is the first solution that can provide secure fully off-line payments while being resilient to all currently known PoS breaches. In particular, we detail FRoDO architecture, components, and protocols. Further, a thorough analysis of FRoDO functional and security properties is provided, showing its effectiveness and viability.",
"title": ""
},
{
"docid": "6101fe189ad6ad7de6723784eec68b42",
"text": "We present a novel system for the automatic extraction of the main melody from polyphonic music recordings. Our approach is based on the creation and characterization of pitch contours, time continuous sequences of pitch candidates grouped using auditory streaming cues. We define a set of contour characteristics and show that by studying their distributions we can devise rules to distinguish between melodic and non-melodic contours. This leads to the development of new voicing detection, octave error minimization and melody selection techniques. A comparative evaluation of the proposed approach shows that it outperforms current state-of-the-art melody extraction systems in terms of overall accuracy. Further evaluation of the algorithm is provided in the form of a qualitative error analysis and the study of the effect of key parameters and algorithmic components on system performance. Finally, we conduct a glass ceiling analysis to study the current limitations of the method, and possible directions for future work are proposed.",
"title": ""
},
{
"docid": "8c1de6e57121c349cadc45068b69bb1f",
"text": "PURPOSE\nTo assess the relationship between serum insulin-like growth factor I (IGF-I) and diabetic retinopathy.\n\n\nMETHODS\nThis was a clinic-based cross-sectional study conducted at the Emory Eye Center. A total of 225 subjects were classified into four groups, based on diabetes status and retinopathy findings: no diabetes mellitus (no DM; n=99), diabetes with no background diabetic retinopathy (no BDR; n=42), nonproliferative diabetic retinopathy (NPDR; n=41), and proliferative diabetic retinopathy (PDR; n=43). Key exclusion criteria included type 1 diabetes and disorders that affect serum IGF-I levels, such as acromegaly. Subjects underwent dilated fundoscopic examination and were tested for hemoglobin A1c, serum creatinine, and serum IGF-I, between December 2009 and March 2010. Serum IGF-I levels were measured using an immunoassay that was calibrated against an international standard.\n\n\nRESULTS\nBetween the groups, there were no statistical differences with regards to age, race, or sex. Overall, diabetic subjects had similar serum IGF-I concentrations compared to nondiabetic subjects (117.6 µg/l versus 122.0 µg/l; p=0.497). There was no significant difference between serum IGF-I levels among the study groups (no DM=122.0 µg/l, no BDR=115.4 µg/l, NPDR=118.3 µg/l, PDR=119.1 µg/l; p=0.897). Among the diabetic groups, the mean IGF-I concentration was similar between insulin-dependent and non-insulin-dependent subjects (116.8 µg/l versus 118.2 µg/l; p=0.876). The univariate analysis of the IGF-I levels demonstrated statistical significance in regard to age (p=0.002, r=-0.20), body mass index (p=0.008, r=-0.18), and race (p=0.040).\n\n\nCONCLUSIONS\nThere was no association between serum IGF-I concentrations and diabetic retinopathy in this large cross-sectional study.",
"title": ""
},
{
"docid": "e0bfadccbcadbe46c4387bd8f0faed9d",
"text": "Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that – among other things – points out some avenues for bridging the gap between control and artificial-intelligence RL techniques.",
"title": ""
},
{
"docid": "5950aadef33caa371f0de304b2b4869d",
"text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries. The study also suggests future research paths to confirm, expand, and validate the new service innovation model.",
"title": ""
},
{
"docid": "f31b3c4a2a8f3f05c3391deb1660ce75",
"text": "In the field of providing mobility for the elderly or disabled the aspect of dealing with stairs continues largely unresolved. This paper focuses on presenting continued development of the “Nagasaki Stairclimber”, a duel section tracked wheelchair capable of negotiating the large number of twisting and irregular stairs typically encounted by the residents living on the slopes that surround the Nagasaki harbor. Recent developments include an auto guidance system, auto leveling of the chair angle and active control of the frontrear track angle.",
"title": ""
},
{
"docid": "e03640352c1b0074a0bdd21cafbda61e",
"text": "The problem of finding an automatic thresholding technique is well known in applications involving image differencing like visual-based surveillance systems, autonomous vehicle driving, etc. Among the algorithms proposed in the past years, the thresholding technique based on the stable Euler number method is considered one of the most promising in terms of visual results. Unfortunately its high computational complexity made it an impossible choice for real-time applications. The implementation here proposed, called fast Euler numbers, overcomes the problem since it calculates all the Euler numbers in just one single raster scan of the image. That is, it runs in OðhwÞ, where h and w are the image s height and width, respectively. A technique for determining the optimal threshold, called zero crossing, is also proposed. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "cf0f63001493acd328a80c80430a5b44",
"text": "Random forest classification is a well known machine learning technique that generates classifiers in the form of an ensemble (\"forest\") of decision trees. The classification of an input sample is determined by the majority classification by the ensemble. Traditional random forest classifiers can be highly effective, but classification using a random forest is memory bound and not typically suitable for acceleration using FPGAs or GP-GPUs due to the need to traverse large, possibly irregular decision trees. Recent work at Lawrence Livermore National Laboratory has developed several variants of random forest classifiers, including the Compact Random Forest (CRF), that can generate decision trees more suitable for acceleration than traditional decision trees. Our paper compares and contrasts the effectiveness of FPGAs, GP-GPUs, and multi-core CPUs for accelerating classification using models generated by compact random forest machine learning classifiers. Taking advantage of training algorithms that can produce compact random forests composed of many, small trees rather than fewer, deep trees, we are able to regularize the forest such that the classification of any sample takes a deterministic amount of time. This optimization then allows us to execute the classifier in a pipelined or single-instruction multiple thread (SIMT) fashion. We show that FPGAs provide the highest performance solution, but require a multi-chip / multi-board system to execute even modest sized forests. GP-GPUs offer a more flexible solution with reasonably high performance that scales with forest size. Finally, multi-threading via Open MP on a shared memory system was the simplest solution and provided near linear performance that scaled with core count, but was still significantly slower than the GP-GPU and FPGA.",
"title": ""
},
{
"docid": "e38e7737e7a9f45c32c05603231e2d56",
"text": "Commercial dictation systems for continues speech have recently become available. Although they generally received positive reviews, error correction continuous to be limited to choosing from a list of alternatives, speaking or typing. We developed a set of interactive methods to correct errors without using keyboard or mouse, allowing the user to switch between the modalities continuous speech, spelling, handwriting and pen gestures. These correction methods were integrated with our large vocabulary speech recognition system to build a prototypical multimodal listening typewriter. The efficiency of different error correction methods was evaluated in a user study. The experiment compares multimodal correction with other methods available in current speech recognition applications. Results confirm the hypothesis that switching between modalities can significantly expedite corrections. Thus, state-of-the-art speech recognition technology with multimodal error correction makes it possible to input text at a faster speed than unskilled typing, including the time necessary to correct errors. In applications where a keyboard is acceptable, however, typing still remains the fastest way to correct errors for users with good typing skills.",
"title": ""
},
{
"docid": "8a36bdb2cc232ab541715a823625b586",
"text": "Artificial insemination (AI) is an important technique in all domestic species to ensure rapid genetic progress. The use of AI has been reported in camelids although insemination trials are rare. This could be because of the difficulties involved in collecting as well as handling the semen due to the gelatinous nature of the seminal plasma. In addition, as all camelids are induced ovulators, the females need to be induced to ovulate before being inseminated. This paper discusses the different methods for collection of camel semen and describes how the semen concentration and morphology are analyzed. It also examines the use of different buffers for liquid storage of fresh and chilled semen, the ideal number of live sperm to inseminate and whether pregnancy rates are improved if the animal is inseminated at the tip of the uterine horn verses in the uterine body. Various methods to induce ovulation in the female camels are also described as well as the timing of insemination in relation to ovulation. Results show that collection of semen is best achieved using an artificial vagina, and the highest pregnancy rates are obtained if a minimum of 150 × 106 live spermatozoa (diluted in Green Buffer, lactose (11%), or I.N.R.A. 96) are inseminated into the body of the uterus 24 h after the GnRH injection, given to the female camel to induce ovulation. Deep freezing of camel semen is proving to be a great challenge but the use of various freezing protocols, different diluents and different packaging methods (straws verses pellets) will be discussed. Preliminary results indicate that Green and Clear Buffer for Camel Semen is the best diluent to use for freezing dromedary semen and that freezing in pellets rather than straws result in higher post-thaw motility. Preservation of semen by deepfreezing is very important in camelids as it prevents the need to transport animals between farms and it extends the reproductive life span of the male, therefore further work needs to be carried out to improve the fertility of frozen/thawed camel spermatozoa.",
"title": ""
}
] |
scidocsrr
|
dd7b9972551d6a8b7413d0ff7d4b45d2
|
Cross-Platform Emoji Interpretation: Analysis, a Solution, and Applications
|
[
{
"docid": "17a0dfece42274180e470f23e532880d",
"text": "Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can breakdown. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.",
"title": ""
},
{
"docid": "546af5877fcd3bbf8d1354701f1ead12",
"text": "Recent studies have found that people interpret emoji characters inconsistently, creating significant potential for miscommunication. However, this research examined emoji in isolation, without consideration of any surrounding text. Prior work has hypothesized that examining emoji in their natural textual contexts would substantially reduce the observed potential for miscommunication. To investigate this hypothesis, we carried out a controlled study with 2,482 participants who interpreted emoji both in isolation and in multiple textual contexts. After comparing the variability of emoji interpretation in each condition, we found that our results do not support the hypothesis in prior work: when emoji are interpreted in textual contexts, the potential for miscommunication appears to be roughly the same. We also identify directions for future research to better understand the interplay between emoji and textual context.",
"title": ""
},
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "41e188c681516862a69fe8e90c58a618",
"text": "This paper explores the use of Information-Centric Networking (ICN) to support management operations in IoT deployments, presenting the design of a flexible architecture that allows the appropriate operation of IoT devices within a delimited ICN network domain. Our architecture has been designed with special consideration to naming, interoperation, security and energy-efficiency requirements. We theoretically assess the communication overhead introduced by the security procedures of our solution, both at IoT devices and clients. Additionally, we show the potential of our architecture to accommodate enhanced management applications, focusing on a specific use case, i.e. an information freshness service level agreement application. Finally, we present a proof-of-concept implementation of our architecture over an Arduino board, and we use it to carry out a set of experiments that validate the feasibility of our solution. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a6f6525af5a1d9306d6b62ebd821f4ba",
"text": "In this report, we introduce the outline of our system in Task 3: Disease Classification of ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection. We fine-tuned multiple pre-trained neural network models based on Squeeze-and-Excitation Networks (SENet) which achieved state-of-the-art results in the field of image recognition. In addition, we used the mean teachers as a semi-supervised learning framework and introduced some specially designed data augmentation strategies for skin lesion analysis. We confirmed our data augmentation strategy improved classification performance and demonstrated 87.2% in balanced accuracy on the official ISIC2018 validation dataset.",
"title": ""
},
{
"docid": "348115a5dddbc2bcdcf5552b711e82c0",
"text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.",
"title": ""
},
{
"docid": "44c5dd0001a05106839b534431b48bc8",
"text": "The Internet and finance are accelerating into integration in the 21 Century. Internet Finance was firstly proposed by Ma Weihua, the former president of China Merchants Bank in July 2012. On the basis of 74 latest research literatures selected from CSSCI Journals, Chinese Core Journals, authoritative magazines and related newspapers, this paper summarizes the current domestic research progress and trend about Internet Finance according to three dimensions, such as the sources of journals, research subjects and research contents. This research shows that the current domestic researches are not only shallow and superficial, but also lack the theoretical analyses and model applications; and the wealth-based and bank-based Internet Finance will be the research focus in the future.",
"title": ""
},
{
"docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54",
"text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.",
"title": ""
},
{
"docid": "da4bac81f8544eb729c7e0aafe814927",
"text": "This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations – regularization, depth and fine-tuning – each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20% over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features – a remarkable 512× compression.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "300bff5036b5b4e83a4bc605020b49e3",
"text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.",
"title": ""
},
{
"docid": "33880207bb52ce7e20c6f5ad80d67a47",
"text": "This research involves the digital transformation of an orthopedic surgical practice office housing three community orthopedic surgeons and a physical therapy treatment clinic in Toronto, Ontario. All three surgeons engage in both a private community orthopaedic surgery practice and hold surgical privileges at a local community hospital which serves a catchment area of more than 850,000 people in the northwest Greater Toronto Area. The clinic employs two full time physical therapists and one office manager for therapy services as well as four administrative assistants who manage the surgeon’s practices.",
"title": ""
},
{
"docid": "4efa56d9c2c387608fe9ddfdafca0f9a",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
{
"docid": "c61559bdb209cf7098bb11c372a483c6",
"text": "This paper presents a lexicon model for the description of verbs, nouns and adjectives to be used in applicatons like sentiment analysis and opinion mining. The model aims to describe the detailed subjectivity relations that exist between the actors in a sentence expressing separate attitudes for each actor. Subjectivity relations that exist between the different actors are labeled with information concerning both the identity of the attitude holder and the orientation (positive vs. negative) of the attitude. The model includes a categorization into semantic categories relevant to opinion mining and sentiment analysis and provides means for the identification of the attitude holder and the polarity of the attitude and for the description of the emotions and sentiments of the different actors involved in the text. Special attention is paid to the role of the speaker/writer of the text whose perspective is expressed and whose views on what is happening are conveyed in the text. Finally, validation is provided by an annotation study that shows that these subtle subjectivity relations are reliably identifiable by human annotators.",
"title": ""
},
{
"docid": "bcf7d85007ebcb6c009bbcbb704e8df4",
"text": "This paper describes the speech activity detection (SAD) system developed by the Patrol team for the first phase of the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state of the art detection capabilities on audio from highly degraded communication channels. We present two approaches to SAD, one based on Gaussian mixture models, and one based on multi-layer perceptrons. We show that significant gains in SAD accuracy can be obtained by careful design of acoustic front end, feature normalization, incorporation of long span features via data-driven dimensionality reducing transforms, and channel dependent modeling. We also present a novel technique for normalizing detection scores from different systems for the purpose of system combination.",
"title": ""
},
{
"docid": "d3875bf0d0bf1af7b7b8044b06152c46",
"text": "This two-part article series covers the design, development, and testing of a reprogrammable UAV autopilot system. Here you get a detailed system-level description of the autopilot design, with specific emphasis on its hardware and software. nmanned aerial vehicle (UAV) usage has increased tremendously in recent years. Although this growth has been fueled mainly by demand from government defense agencies, UAVs are now being used for non-military endeavors as well. Today, UAVs are employed for purposes ranging from wildlife tracking to forest fire monitoring. Advances in microelectronics technology have enabled engineers to automate such aircraft and convert them into useful remote-sensing platforms. For instance, due to sensor development in the automotive industry and elsewhere, the cost of the components required to build such systems has fallen greatly. In this two-part article series, we'll present the design, development, and flight test results for a reprogrammable UAV autopi-lot system. The design is primarily focused on supporting guidance, navigation, and control (GNC) research. It facilitates a fric-tionless transition from software simulation to hardware-in-the-loop (HIL) simulation to flight tests, eliminating the need to write low-level source code. We can easily make, simulate, and test changes in the algorithms on the hardware before attempting flight. The hardware is primarily \" programmed \" using MathWorks Simulink, a block-diagram based tool for modeling, simulating, and analyzing dynamical systems.",
"title": ""
},
{
"docid": "a8981ddf9617beb921f12d5fbddadc56",
"text": "This paper develops an indoor intelligent service mobile robot that has multiple functions, can recognize and grip the target object, avoid obstacles, and accurately localize via relative position. The locating method of the robot uses the output values of the sensor module, which includes data from a gyroscope and a magnetometer, to correct the current rotation direction angle of the robot. An angle correction method can be divided into three parts. The first part calculates the angle values obtained from the gyroscope and the magnetometer that are installed on the robot. The second part explores the error characteristics between the sensor module and the actual rotation direction angle of the robot. The third part uses the error characteristic data to design the fuzzy rule base and the Kalman filter to eliminate errors and to get a more accurate orientation angle. These errors can be described as either regular or irregular. The former can be eliminated by fuzzy algorithm compensation, and the latter can be eliminated by the Kalman filter. The contribution of this paper is to propose an error correction method between the calculus rotation angle determined by the sensor and the actual rotation angle of the robot such that the three moving paths, i.e., specified, actual, and calculus paths, have more accurate approximation. The experimental results demonstrate that the combination of fuzzy compensation and the Kalman filter is an accurate correction method.",
"title": ""
},
{
"docid": "db83ca64b54bbd54b4097df425c48017",
"text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. Highresolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.",
"title": ""
},
{
"docid": "8d4bdc3e5e84a63a76e6a226a9f0e558",
"text": "HTTP cookies are the de facto mechanism for session authentication in Web applications. However, their inherent security weaknesses allow attacks against the integrity of Web sessions. HTTPS is often recommended to protect cookies, but deploying full HTTPS support can be challenging due to performance and financial concerns, especially for highly distributed applications. Moreover, cookies can be exposed in a variety of ways even when HTTPS is enabled. In this article, we propose one-time cookies (OTC), a more robust alternative for session authentication. OTC prevents attacks such as session hijacking by signing each user request with a session secret securely stored in the browser. Unlike other proposed solutions, OTC does not require expensive state synchronization in the Web application, making it easily deployable in highly distributed systems. We implemented OTC as a plug-in for the popular WordPress platform and as an extension for Firefox and Firefox for mobile browsers. Our extensive experimental analysis shows that OTC introduces a latency of less than 6 ms when compared to cookies—a negligible overhead for most Web applications. Moreover, we show that OTC can be combined with HTTPS to effectively add another layer of security to Web applications. In so doing, we demonstrate that one-time cookies can significantly improve the security of Web applications with minimal impact on performance and scalability.",
"title": ""
},
{
"docid": "037ff53b19c51dca7ce6418e8dbbc4f8",
"text": "Critical driver genomic events in colorectal cancer have been shown to affect the response to targeted agents that were initially developed under the 'one gene, one drug' paradigm of precision medicine. Our current knowledge of the complexity of the cancer genome, clonal evolution patterns under treatment pressure and pharmacodynamic effects of target inhibition support the transition from a one gene, one drug approach to a 'multi-gene, multi-drug' model when making therapeutic decisions. Better characterization of the transcriptomic subtypes of colorectal cancer, encompassing tumour, stromal and immune components, has revealed convergent pathway dependencies that mandate a 'multi-molecular' perspective for the development of therapies to treat this disease.",
"title": ""
},
{
"docid": "7fece61e99d0b461b04bcf0dfa81639d",
"text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.",
"title": ""
},
{
"docid": "b18e65ad7982944ef9ad213d98d45dad",
"text": "This paper provides an overview of the physical layer specification of Advanced Television Systems Committee (ATSC) 3.0, the next-generation digital terrestrial broadcasting standard. ATSC 3.0 does not have any backwards-compatibility constraint with existing ATSC standards, and it uses orthogonal frequency division multiplexing-based waveforms along with powerful low-density parity check (LDPC) forward error correction codes similar to existing state-of-the-art. However, it introduces many new technological features such as 2-D non-uniform constellations, improved and ultra-robust LDPC codes, power-based layered division multiplexing to efficiently provide mobile and fixed services in the same radio frequency (RF) channel, as well as a novel frequency pre-distortion multiple-input single-output antenna scheme. ATSC 3.0 also allows bonding of two RF channels to increase the service peak data rate and to exploit inter-RF channel frequency diversity, and to employ dual-polarized multiple-input multiple-output antenna system. Furthermore, ATSC 3.0 provides great flexibility in terms of configuration parameters (e.g., 12 coding rates, 6 modulation orders, 16 pilot patterns, 12 guard intervals, and 2 time interleavers), and also a very flexible data multiplexing scheme using time, frequency, and power dimensions. As a consequence, ATSC 3.0 not only improves the spectral efficiency and robustness well beyond the first generation ATSC broadcast television standard, but also it is positioned to become the reference terrestrial broadcasting technology worldwide due to its unprecedented performance and flexibility. Another key aspect of ATSC 3.0 is its extensible signaling, which will allow including new technologies in the future without disrupting ATSC 3.0 services. This paper provides an overview of the physical layer technologies of ATSC 3.0, covering the ATSC A/321 standard that describes the so-called bootstrap, which is the universal entry point to an ATSC 3.0 signal, and the ATSC A/322 standard that describes the physical layer downlink signals after the bootstrap. A summary comparison between ATSC 3.0 and DVB-T2 is also provided.",
"title": ""
}
] |
scidocsrr
|
f992ab9730adea9ef71dff62a2a962cb
|
A data mining framework for optimal product selection in retail supermarket data: the generalized PROFSET model
|
[
{
"docid": "74ef26e332b12329d8d83f80169de5c0",
"text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.",
"title": ""
}
] |
[
{
"docid": "4c0c4b68cdfa1cf684eabfa20ee0b88b",
"text": "Orthogonal Frequency Division Multiplexing (OFDM) is an attractive technique for wireless communication over frequency-selective fading channels. OFDM suffers from high Peak-to-Average Power Ratio (PAPR), which limits OFDM usage and reduces the efficiency of High Power Amplifier (HPA) or badly degrades BER. Many PAPR reduction techniques have been proposed in the literature. PAPR reduction techniques can be classified into blind receiver and non-blind receiver techniques. Active Constellation Extension (ACE) is one of the best blind receiver techniques. While, Partial Transmit Sequence (PTS) can work as blind / non-blind technique. PTS has a great PAPR reduction gain on the expense of increasing computational complexity. In this paper we combine PTS with ACE in four possible ways to be suitable for blind receiver applications with better performance than conventional methods (i.e. PTS and ACE). Results show that ACE-PTS scheme is the best among others. Expectedly, any hybrid technique has computational complexity larger than that of its components. However, ACE-PTS can be used to achieve the same performance as that of PTS or worthy better, with less number of subblocks (i.e. with less computational complexity) especially in low order modulation techniques (e.g. 4-QAM and 16-QAM). Results show that ACE-PTS with V=8 can perform similar to or better than PTS with V=10 in 16-QAM or 4-QAM, respectively, with 74% and 40.5% reduction in required numbers of additions and multiplications, respectively.",
"title": ""
},
{
"docid": "1e868977ef9377d0dca9ba39b6ba5898",
"text": "During last decade, tremendous efforts have been devoted to the research of time series classification. Indeed, many previous works suggested that the simple nearest-neighbor classification is effective and difficult to beat. However, we usually need to determine the distance metric (e.g., Euclidean distance and Dynamic Time Warping) for different domains, and current evidence shows that there is no distance metric that is best for all time series data. Thus, the choice of distance metric has to be done empirically, which is time expensive and not always effective. To automatically determine the distance metric, in this paper, we investigate the distance metric learning and propose a novel Convolutional Nonlinear Neighbourhood Components Analysis model for time series classification. Specifically, our model performs supervised learning to project original time series into a transformed space. When classifying, nearest neighbor classifier is then performed in this transformed space. Finally, comprehensive experimental results demonstrate that our model can improve the classification accuracy to some extent, which indicates that it can learn a good distance metric.",
"title": ""
},
{
"docid": "b7b2049ef36bd778c32f505ee3b509e6",
"text": "The larger and longer body of multi-axle vehicle makes it difficult to steer as flexibly as usual. For this reason, a novel steering mode which combines traditional Ackerman steering and Skid steering method is proposed and the resulted turning characteristics is studied in this research. First, the research methods are identified by means of building and analysing a vehicle dynamical model. Then, the influence of rear-wheels' assisted steering on vehicle yaw rate, turning radius and wheel side-slip angle is analysed by solving a linear simplified model. An executive strategy of an additional yaw moment produced by rear-wheels during the vehicle steering at a relative lower speed is put forward. And a torque distribution method of rear-wheels is given. Finally, a comparison with all-wheel steering vehicles is made. It turned out that this steering mode can effectively decrease the turning radius or increase mobility and have an advantage over all-wheel steering.",
"title": ""
},
{
"docid": "037dcb40dff3d16a13843df2f618245c",
"text": "Deep convolutional neural networks (CNNs) can be applied to malware binary detection through images classification. The performance, however, is degraded due to the imbalance of malware families (classes). To mitigate this issue, we propose a simple yet effective weighted softmax loss which can be employed as the final layer of deep CNNs. The original softmax loss is weighted, and the weight value can be determined according to class size. A scaling parameter is also included in computing the weight. Proper selection of this parameter has been studied and an empirical option is given. The weighted loss aims at alleviating the impact of data imbalance in an end-to-end learning fashion. To validate the efficacy, we deploy the proposed weighted loss in a pre-trained deep CNN model and fine-tune it to achieve promising results on malware images classification. Extensive experiments also indicate that the new loss function can fit other typical CNNs with an improved classification performance. Keywords— Deep Learning, Malware Images, Convolutional Neural Networks, CNN, Image Classification, Imbalanced Data Classification, Softmaxloss",
"title": ""
},
{
"docid": "4f3936b753abd2265d867c0937aec24c",
"text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.",
"title": ""
},
{
"docid": "fc726cbc5f4c0b9faa47a52ca7e73f9a",
"text": "Osteoarthritis (OA) has long been considered a \"wear and tear\" disease leading to loss of cartilage. OA used to be considered the sole consequence of any process leading to increased pressure on one particular joint or fragility of cartilage matrix. Progress in molecular biology in the 1990s has profoundly modified this paradigm. The discovery that many soluble mediators such as cytokines or prostaglandins can increase the production of matrix metalloproteinases by chondrocytes led to the first steps of an \"inflammatory\" theory. However, it took a decade before synovitis was accepted as a critical feature of OA, and some studies are now opening the way to consider the condition a driver of the OA process. Recent experimental data have shown that subchondral bone may have a substantial role in the OA process, as a mechanical damper, as well as a source of inflammatory mediators implicated in the OA pain process and in the degradation of the deep layer of cartilage. Thus, initially considered cartilage driven, OA is a much more complex disease with inflammatory mediators released by cartilage, bone and synovium. Low-grade inflammation induced by the metabolic syndrome, innate immunity and inflammaging are some of the more recent arguments in favor of the inflammatory theory of OA and highlighted in this review.",
"title": ""
},
{
"docid": "5ee940efb443ee38eafbba9e0d14bdd2",
"text": "BACKGROUND\nThe stability of biochemical analytes has already been investigated, but results strongly differ depending on parameters, methodologies, and sample storage times. We investigated the stability for many biochemical parameters after different storage times of both whole blood and plasma, in order to define acceptable pre- and postcentrifugation delays in hospital laboratories.\n\n\nMETHODS\nTwenty-four analytes were measured (Modular® Roche analyzer) in plasma obtained from blood collected into lithium heparin gel tubes, after 2-6 hr of storage at room temperature either before (n = 28: stability in whole blood) or after (n = 21: stability in plasma) centrifugation. Variations in concentrations were expressed as mean bias from baseline, using the analytical change limit (ACL%) or the reference change value (RCV%) as acceptance limit.\n\n\nRESULTS\nIn tubes stored before centrifugation, mean plasma concentrations significantly decreased after 3 hr for phosphorus (-6.1% [95% CI: -7.4 to -4.7%]; ACL 4.62%) and lactate dehydrogenase (LDH; -5.7% [95% CI: -7.4 to -4.1%]; ACL 5.17%), and slightly decreased after 6 hr for potassium (-2.9% [95% CI: -5.3 to -0.5%]; ACL 4.13%). In plasma stored after centrifugation, mean concentrations decreased after 6 hr for bicarbonates (-19.7% [95% CI: -22.9 to -16.5%]; ACL 15.4%), and moderately increased after 4 hr for LDH (+6.0% [95% CI: +4.3 to +7.6%]; ACL 5.17%). Based on RCV, all the analytes can be considered stable up to 6 hr, whether before or after centrifugation.\n\n\nCONCLUSION\nThis study proposes acceptable delays for most biochemical tests on lithium heparin gel tubes arriving at the laboratory or needing to be reanalyzed.",
"title": ""
},
{
"docid": "f5a188c87dd38a0a68612352891bcc3f",
"text": "Sentiment analysis of online documents such as news articles, blogs and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with the distribution on a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiment on the real-world data sets has validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles, identifying social emotions on certain entities and news events.",
"title": ""
},
{
"docid": "0bd7956dbee066a5b7daf4cbd5926f35",
"text": "Computer networks lack a general control paradigm, as traditional networks do not provide any networkwide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.",
"title": ""
},
{
"docid": "40043360644ded6950e1f46bd2caaf96",
"text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.",
"title": ""
},
{
"docid": "a3be253034ffcf61a25ad265fda1d4ff",
"text": "With the development of automated logistics systems, flexible manufacture systems (FMS) and unmanned automated factories, the application of automated guided vehicle (AGV) gradually become more important to improve production efficiency and logistics automatism for enterprises. The development of the AGV systems play an important role in reducing labor cost, improving working conditions, unifying information flow and logistics. Path planning has been a key issue in AGV control system. In this paper, two key problems, shortest time path planning and collision in multi AGV have been solved. An improved A-Star (A*) algorithm is proposed, which introduces factors of turning, and edge removal based on the improved A* algorithm is adopted to solve k shortest path problem. Meanwhile, a dynamic path planning method based on A* algorithm which searches effectively the shortest-time path and avoids collision has been presented. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.",
"title": ""
},
{
"docid": "8c80b8b0e00fa6163d945f7b1b8f63e5",
"text": "In this paper, we propose an architecture model called Design Rule Space (DRSpace). We model the architecture of a software system as multiple overlapping DRSpaces, reflecting the fact that any complex software system must contain multiple aspects, features, patterns, etc. We show that this model provides new ways to analyze software quality. In particular, we introduce an Architecture Root detection algorithm that captures DRSpaces containing large numbers of a project’s bug-prone files, which are called Architecture Roots (ArchRoots). After investigating ArchRoots calculated from 15 open source projects, the following observations become clear: from 35% to 91% of a project’s most bug-prone files can be captured by just 5 ArchRoots, meaning that bug-prone files are likely to be architecturally connected. Furthermore, these ArchRoots tend to live in the system for significant periods of time, serving as the major source of bug-proneness and high maintainability costs. Moreover, each ArchRoot reveals multiple architectural flaws that propagate bugs among files and this will incur high maintenance costs over time. The implication of our study is that the quality, in terms of bug-proneness, of a large, complex software project cannot be fundamentally improved without first fixing its architectural flaws.",
"title": ""
},
{
"docid": "d2928d8227544e8251818f06099b17fd",
"text": "Driven by the dominance of the relational model, the requirements of modern applications, and the veracity of data, we revisit the fundamental notion of a key in relational databases with NULLs. In SQL database systems primary key columns are NOT NULL by default. NULL columns may occur in unique constraints which only guarantee uniqueness for tuples which do not feature null markers in any of the columns involved, and therefore serve a different function than primary keys. We investigate the notions of possible and certain keys, which are keys that hold in some or all possible worlds that can originate from an SQL table, respectively. Possible keys coincide with the unique constraint of SQL, and thus provide a semantics for their syntactic definition in the SQL standard. Certain keys extend primary keys to include NULL columns, and thus form a sufficient and necessary condition to identify tuples uniquely, while primary keys are only sufficient for that purpose. In addition to basic characterization, axiomatization, and simple discovery approaches for possible and certain keys, we investigate the existence and construction of Armstrong tables, and describe an indexing scheme for enforcing certain keys. Our experiments show that certain keys with NULLs do occur in real-world databases, and that related computational problems can be solved efficiently. Certain keys are therefore semantically well-founded and able to maintain data quality in the form of Codd’s entity integrity rule while handling the requirements of modern applications, that is, higher volumes of incomplete data from different formats.",
"title": ""
},
{
"docid": "c2d17d5a5db10efafa4e56a2b6cd7afa",
"text": "The main purpose of analyzing the social network data is to observe the behaviors and trends that are followed by people. How people interact with each other, what they usually share, what are their interests on social networks, so that analysts can focus new trends for the provision of those things which are of great interest for people so in this paper an easy approach of gathering and analyzing data through keyword based search in social networks is examined using NodeXL and data is gathered from twitter in which political trends have been analyzed. As a result it will be analyzed that, what people are focusing most in politics.",
"title": ""
},
{
"docid": "6f6706ee6f54d71a172c43403cdb6135",
"text": "Stator dc-excited vernier reluctance machines (dc-VRMs) are a kind of a novel vernier reluctance synchronous machine that employs doubly salient structures; their innovations include stator concentrated dc windings to generate the exciting field. Compared with the rotor wound field machines or stator/rotor PM synchronous machines, these machines are characterized by low cost due to the absence of PMs, a robust rotor structure, and a wide speed range resulting from the flexible stator dc exciting field. In this paper, with the proposed phasor diagram, the power factor of dc-VRMs is analyzed analytically and with the finite-element analysis, and the analysis results are confirmed with the experiment. It is found that, with constant slot sizes and slot fill, the power factor is mainly dependent on the ratio of the dc current to the armature winding current and also the ratio of the armature synchronous inductance to the mutual inductance between the field winding and the armature winding. However, torque will be sacrificed if measures are taken to further improve the power factor.",
"title": ""
},
{
"docid": "051d402ce90d7d326cc567e228c8411f",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea29b3421c36178680ae63c16b9cecad",
"text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.",
"title": ""
},
{
"docid": "99ea986731bd262e1b6380d1baac62c4",
"text": "A patient of 58 years of age without medical problems came to the clinic due to missing teeth in the upper posterior region and to change the partial fixed prosthesis in the upper anterior area. Proposed treatment: surgical phase of three conical shape tapering implants with prosthetic platform in occlusal direction with mechanize collar tissue level with fixtures to place implant-supported metal-ceramic restorations. In the anterior area, a zirconium oxide fixed partial prosthesis was vertical preparation of the tooth's. When preparing teeth to receive fixed prostheses, the definition and shape of finish lines has been a subject of endless discussion, modification, and change ever since the beginnings of restorative prosthetic dentistry. The BOPT technique (biologically oriented preparation technique) was first described in the context of tooth-supported restorations but has recently been applied to dental implants with the aim of ensuring healthy peri-implant tissue and creating the possibility of modeling the peri-implant sulcus by modifying prosthetic emergence profiles. Vertical preparation of teeth and abutments without finish line on implants is a technique which was found to be adequate for ensuring the remodeling and stability of peri-implant tissues. Key words:Peri-implant tissue health, shoulderless abutments.",
"title": ""
},
{
"docid": "cdbdd1a6cd129b42065183a6f7fc5bc9",
"text": "Many methods designed to create defenses against distributed denial of service (DDoS) attacks are focused on the IP and TCP layers instead of the high layer. They are not suitable for handling the new type of attack which is based on the application layer. In this paper, we introduce a new scheme to achieve early attack detection and filtering for the application-layer-based DDoS attack. An extended hidden semi-Markov model is proposed to describe the browsing behaviors of web surfers. In order to reduce the computational amount introduced by the model's large state space, a novel forward algorithm is derived for the online implementation of the model based on the M-algorithm. Entropy of the user's HTTP request sequence fitting to the model is used as a criterion to measure the user's normality. Finally, experiments are conducted to validate our model and algorithm.",
"title": ""
},
{
"docid": "c83d034e052926520677d0c5880f8800",
"text": "Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.",
"title": ""
}
] |
scidocsrr
|
defecff04057e8ff118193dd21c02d86
|
A 1.8-V 1.8 GHz LC VCO for WSN application
|
[
{
"docid": "45e5227a5b156806a3bdc560ce895651",
"text": "This paper presents reconfigurable RF integrated circuits (ICs) for a compact implementation of an intelligent RF front-end for multiband and multistandard applications. Reconfigurability has been addressed at each level starting from the basic elements to the RF blocks and the overall front-end architecture. An active resistor tunable from 400 to 1600 /spl Omega/ up to 10 GHz has been designed and an equivalent model has been extracted. A fully tunable active inductor using a tunable feedback resistor has been proposed that provides inductances between 0.1-15 nH with Q>50 in the C-band. To demonstrate reconfigurability at the block level, voltage-controlled oscillators with very wide tuning ranges have been implemented in the C-band using the proposed active inductor, as well as using a switched-spiral resonator with capacitive tuning. The ICs have been implemented using 0.18-/spl mu/m Si-CMOS and 0.18-/spl mu/m SiGe-BiCMOS technologies.",
"title": ""
}
] |
[
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "aff9d415a725b9e1ea65897af2715729",
"text": "Survey research is believed to be well understood and applied by MIS scholars. It has been applied for several years, it is well defined, and it has precise procedures which, when followed closely, yield valid and easily interpretable data. Our assessment of the use of survey research in the MIS field between 1980 and 1990 indicates that this perception is at odds with reality. Our analysis indicates that survey methodology is often misapplied and is plagued by five important weaknesses: (1) single method designs where multiple methods are needed, (2) unsystematic and often inadequate sampling procedures, (3) low response rates, (4) weak linkages between units of analysis and respondents, and (5) over reliance on cross-sectional surveys where longitudinal surveys are really needed. Our assessment also shows that the quality of survey research varies considerably among studies of different purposes: explanatory studies are of good quality overall, exploratory and descriptive studies are of moderate to poor quality. This article presents a general framework for classifying and examining survey research and uses this framework to assess, review and critique the usage of survey research conducted in the past decade in the MIS field. In an effort to improve the quality of survey research, this article makes specific recommendations that directly address the major problems highlighted in the review. AUTHORS' BIOGRAPHIES Alain Pinsonneault holds a Ph.d. in administration from University of California at Irvine (1990) and a M.Sc. in Management Information Systems from Ecole des Hautes Etudes Commerciales de Montreal (1986). His current research interests include the organizational implications of computing, especially with regard to the centralization/decentralization of decision making authority and middle managers workforce; the strategic and political uses of computing, the use of information technology to support group decision making process; and the benefits of computing. He has published articles in Decision Support Systems, European Journal of Operational Research, and in Management Information Systems Quarterly, and one book chapter. He has also given numerous conferences and he is an associate editor of Informatization and the Public Sector journal. His doctoral dissertation won the 1990 International Center for Information Technology Doctoral Award. Kenneth L. Kraemer is the Director of the Public Policy Research Organization and Professor of Management and Information and Computer Science. He holds a Ph.D. from University of Southern California. Professor Kraemer has conducted research into the management of computing in organizations for more than 20 years. He is currently studying the diffusion of computing in Asia-Pacific countries, the dynamics of computing development in organizations, the impacts of computing on productivity in the work environment, and policies for successful implementation of computer-based information systems. In addition, Professor Kraemer is coeditor of a series of books entitled Computing, Organization, Policy, and Society (CORPS) published by Columbia University Press. He has published numerous books on computing, the most recent of which being Managing Information Systems. He has served as a consultant to the Department of Housing and Urban Development, the Office of Technology Assessment and the United Nations, and as a national expert to the Organization for Economic Cooperation and Development. 
He was recently Shaw Professor in Information Systems and Computer Sciences at the National University of Singapore.",
"title": ""
},
{
"docid": "bdefc710647c80630cb089aec9d79197",
"text": "This chapter introduces a new computational intelligence paradigm to perform pattern recognition, named Artificial Immune Systems (AIS). AIS take inspiration from the immune system in order to build novel computational tools to solve problems in a vast range of domain areas. The basic immune theories used to explain how the immune system perform pattern recognition are described and their corresponding computational models are presented. This is followed with a survey from the literature of AIS applied to pattern recognition. The chapter is concluded with a trade-off between AIS and artificial neural networks as pattern recognition paradigms.",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: crego@bus.olemiss.edu (C. R (D. Gamboa), fred.glover@colorado.edu (F. Glover), colin.j.osterman@navy.mil (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "21be75a852ab69d391d8d6f4ed911f46",
"text": "We have been developing an exoskeleton robot (ExoRob) for assisting daily upper limb movements (i.e., shoulder, elbow and wrist). In this paper we have focused on the development of a 2DOF ExoRob to rehabilitate elbow joint flexion/extension and shoulder joint internal/external rotation, as a step toward the development of a complete (i.e., 3DOF) shoulder motion assisted exoskeleton robot. The proposed ExoRob is designed to be worn on the lateral side of the upper arm in order to provide naturalistic movements at the level of elbow (flexion/extension) and shoulder joint internal/external rotation. This paper also focuses on the modeling and control of the proposed ExoRob. A kinematic model of ExoRob has been developed based on modified Denavit-Hartenberg notations. In dynamic simulations of the proposed ExoRob, a novel nonlinear sliding mode control technique with exponential reaching law and computed torque control technique is employed, where trajectory tracking that corresponds to typical rehab (passive) exercises has been carried out to evaluate the effectiveness of the developed model and controller. Simulated results show that the controller is able to drive the ExoRob efficiently to track the desired trajectories, which in this case consisted in passive arm movements. Such movements are used in rehabilitation and could be performed very efficiently with the developed ExoRob and the controller. Experiments were carried out to validate the simulated results as well as to evaluate the performance of the controller.",
"title": ""
},
{
"docid": "025932fa63b24d65f3b61e07864342b7",
"text": "The realization of the Internet of Things (IoT) paradigm relies on the implementation of systems of cooperative intelligent objects with key interoperability capabilities. One of these interoperability features concerns the cooperation among nodes towards a collaborative deployment of applications taking into account the available resources, such as electrical energy, memory, processing, and object capability to perform a given task, which are",
"title": ""
},
{
"docid": "254f2ef4608ea3c959e049073ad063f8",
"text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.",
"title": ""
},
{
"docid": "9458b13e5a87594140d7ee759e06c76c",
"text": "Digital ecosystem, as a neoteric terminology, has emerged along with the appearance of Business Ecosystem which is a form of naturally existing business network of small and medium enterprises. However, few researches have been found in the field of defining digital ecosystem. In this paper, by means of ontology technology as our research methodology, we propose to develop a conceptual model for digital ecosystem. By introducing an innovative ontological notation system, we create the hierarchical framework of digital ecosystem form up to down, based on the related theories form Digital ecosystem and business intelligence institute.",
"title": ""
},
{
"docid": "3725224178318d33b4c8ceecb6f03cfd",
"text": "The 'chain of survival' has been a useful tool for improving the understanding of, and the quality of the response to, cardiac arrest for many years. In the 2005 European Resuscitation Council Guidelines the importance of recognising critical illness and preventing cardiac arrest was highlighted by their inclusion as the first link in a new four-ring 'chain of survival'. However, recognising critical illness and preventing cardiac arrest are complex tasks, each requiring the presence of several essential steps to ensure clinical success. This article proposes the adoption of an additional chain for in-hospital settings--a 'chain of prevention'--to assist hospitals in structuring their care processes to prevent and detect patient deterioration and cardiac arrest. The five rings of the chain represent 'staff education', 'monitoring', 'recognition', the 'call for help' and the 'response'. It is believed that a 'chain of prevention' has the potential to be understood well by hospital clinical staff of all grades, disciplines and specialties, patients, and their families and friends. The chain provides a structure for research to identify the importance of each of the various components of rapid response systems.",
"title": ""
},
{
"docid": "a7dccceaa84296d2c9a32386295dcb65",
"text": "There has been much debate surrounding the potential benefits and costs of online interaction. The present research argues that engagement with online discussion forums can have underappreciated benefits for users’ well-being and engagement in offline civic action, and that identification with other online forum users plays a key role in this regard. Users of a variety of online discussion forums participated in this study. We hypothesized and found that participants who felt their expectations had been exceeded by the forum reported higher levels of forum identification. Identification, in turn, predicted their satisfaction with life and involvement in offline civic activities. Formal analyses confirmed that identification served as a mediator for both of these outcomes. Importantly, whether the forum concerned a stigmatized topic moderated certain of these relationships. Findings are discussed in the context of theoretical and applied implications. 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http:// creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "049f0308869c53bbb60337874789d569",
"text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.",
"title": ""
},
{
"docid": "038637eebbf8474bf15dab1c9a81ed6d",
"text": "As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state of the art attacks. Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC's CPU core. We also present a fully invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.",
"title": ""
},
{
"docid": "fb1c9fcea2f650197b79711606d4678b",
"text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.",
"title": ""
},
{
"docid": "d58f60013b507b286fcfc9f19304fea6",
"text": "The outcome of patients suffering from spondyloarthritis is determined by chronic inflammation and new bone formation leading to ankylosis. The latter process manifests by new cartilage and bone formation leading to joint or spine fusion. This article discusses the main mechanisms of new bone formation in spondyloarthritis. It reviews the key molecules and concepts of new bone formation and ankylosis in animal models of disease and translates these findings to human disease. In addition, proposed biomarkers of new bone formation are evaluated and the translational current and future challenges are discussed with regards to new bone formation in spondyloarthritis.",
"title": ""
},
{
"docid": "9cdcf6718ace17a768f286c74c0eb11c",
"text": "Trapa bispinosa Roxb. which belongs to the family Trapaceae is a small herb well known for its medicinal properties and is widely used worldwide. Trapa bispinosa or Trapa natans is an important plant of Indian Ayurvedic system of medicine which is used in the problems of stomach, genitourinary system, liver, kidney, and spleen. It is bitter, astringent, stomachic, diuretic, febrifuge, and antiseptic. The whole plant is used in gonorrhea, menorrhagia, and other genital affections. It is useful in diarrhea, dysentery, ophthalmopathy, ulcers, and wounds. These are used in the validated conditions in pitta, burning sensation, dipsia, dyspepsia, hemorrhage, hemoptysis, diarrhea, dysentery, strangely, intermittent fever, leprosy, fatigue, inflammation, urethrorrhea, fractures, erysipelas, lumbago, pharyngitis, bronchitis and general debility, and suppressing stomach and heart burning. Maybe it is due to photochemical content of Trapa bispinosa having high quantity of minerals, ions, namely, Ca, K, Na, Zn, and vitamins; saponins, phenols, alkaloids, H-donation, flavonoids are reported in the plants. Nutritional and biochemical analyses of fruits of Trapa bispinosa in 100 g showed 22.30 and 71.55% carbohydrate, protein contents were 4.40% and 10.80%, a percentage of moisture, fiber, ash, and fat contents were 70.35 and 7.30, 2.05 and 6.35, 2.30 and 8.50, and 0.65 and 1.85, mineral contents of the seeds were 32 mg and 102.85 mg calcium, 1.4 and 3.8 mg Iron, and 121 and 325 mg phosphorus in 100 g, and seeds of Trapa bispinosa produced 115.52 and 354.85 Kcal of energy, in fresh and dry fruits, respectively. Chemical analysis of the fruit and fresh nuts having considerable water content citric acid and fresh fruit which substantiates its importance as dietary food also reported low crude lipid, and major mineral present with confirming good amount of minerals as an iron and manganese potassium were contained in the fruit. Crude fiber, total protein content of the water chestnut kernel, Trapa bispinosa are reported. In this paper, the recent reports on nutritional, phytochemical, and pharmacological aspects of Trapa bispinosa Roxb, as a medicinal and nutritional food, are reviewed.",
"title": ""
},
{
"docid": "6c2317957daf4f51354114de62f660a1",
"text": "This paper proposes a framework for recognizing complex human activities in videos. Our method describes human activities in a hierarchical discriminative model that operates at three semantic levels. At the lower level, body poses are encoded in a representative but discriminative pose dictionary. At the intermediate level, encoded poses span a space where simple human actions are composed. At the highest level, our model captures temporal and spatial compositions of actions into complex human activities. Our human activity classifier simultaneously models which body parts are relevant to the action of interest as well as their appearance and composition using a discriminative approach. By formulating model learning in a max-margin framework, our approach achieves powerful multi-class discrimination while providing useful annotations at the intermediate semantic level. We show how our hierarchical compositional model provides natural handling of occlusions. To evaluate the effectiveness of our proposed framework, we introduce a new dataset of composed human activities. We provide empirical evidence that our method achieves state-of-the-art activity classification performance on several benchmark datasets.",
"title": ""
},
{
"docid": "c82cecc94eadfa9a916d89a9ee3fac21",
"text": "In this paper, we develop a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. We model the optimizing behavior of the various decision-makers, derive the equilibrium conditions, and establish the finite-dimensional variational inequality formulation. We provide qualitative properties of the equilibrium pattern in terms of existence and uniqueness results and also establish conditions under which the proposed computational procedure is guaranteed to converge. Finally, we illustrate the model through several numerical examples for which the equilibrium prices and product shipments are computed. This is the first supply chain network equilibrium model with random demands for which modeling, qualitative analysis, and computational results have been obtained.",
"title": ""
},
{
"docid": "362bd9e95f9b0304fa95a647a8a7ee45",
"text": "Cluster labelling is a technique which provides useful information about the cluster to the end users. In this paper, we propose a novel approach which is the follow-up of our previous work. Our earlier approach generates clusters of web documents by using a modified apriori approach which is more efficient and faster than the traditional apriori approach. To label the clusters, the propose approach used an effective feature selection technique which selects the top features of a cluster. Rather than labelling the cluster with ‘bag of words’, a concept driven mechanism has been developed which uses the Wikipedia that takes the top features of a cluster as input to generate the possible candidate labels. Mutual information (MI) score technique has been used for ranking the candidate labels and then the topmost candidates are considered as potential labels of a cluster. Experimental results on two benchmark datasets demonstrate the efficiency of our approach.",
"title": ""
}
] |
scidocsrr
|